Search Results: "bar"

10 November 2023

Jonathan Dowland: Plato document reader

Kobo Libra 2
text-handling in Plato
Until now, I haven't hacked my Kobo Libra 2 ereader, despite knowing it is a relatively open device. The default document reader (Nickel) does everything I need it to. Syncing the books via USB is tedious, but I don't do it that often. Via Videah's blog post My E-Reader Setup, I learned of Plato, an alternative document reader. Plato doesn't really offer any headline features that I need, but it cost me nothing to try it out, so I installed it (fairly painlessly) and launched it just once. The library view seems good, although I've not used it much: I picked a book and read it through [1], and I'm 60% through another [2]. I tend to read one ebook at a time. The main reader interface is great: Just the text [3]. Page transitions are really, really fast. Tweaking the backlight intensity is a little slower than Nickel: menu-driven rather than an active scroll region (which is convenient in Nickel but easy to accidentally turn to 0% and hard to recover from in pitch black). Now that I've started down the road of hacking the Kobo, I think I will explore wifi-syncing the library, perhaps using a variation on the hook scripts shared in Videah's blog post.

  1. Venomous Lumpsucker by Ned Beauman. It's fantastic. Guardian review
  2. There Is No Antimemetics Division by qntm
  3. I do miss Nickel's tiny progress bar somewhat: the only non-text bit of UX I left turned on.

7 November 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 2)

TL;DR: This blog post explores the color capabilities of AMD hardware and how they are exposed to userspace through driver-specific properties. It discusses the different color blocks in the AMD Display Core Next (DCN) pipeline and their capabilities, such as predefined transfer functions, 1D and 3D lookup tables (LUTs), and color transformation matrices (CTMs). It also highlights the differences in AMD HW blocks for pre and post-blending adjustments, and how these differences are reflected in the available driver-specific properties. Overall, this blog post provides a comprehensive overview of the color capabilities of AMD hardware and how they can be controlled by userspace applications through driver-specific properties. This information is valuable for anyone who wants to develop applications that can take advantage of the AMD color management pipeline. Get a closer look at each hardware block's capabilities, unlock a wealth of knowledge about AMD display hardware, and enhance your understanding of graphics and visual computing. Stay tuned for future developments as we embark on a quest for GPU color capabilities in the ever-evolving realm of rainbow treasures.
Operating Systems can use the power of GPUs to ensure consistent color reproduction across graphics devices. We can use GPU-accelerated color management to manage the diversity of color profiles, do color transformations to convert between High-Dynamic-Range (HDR) and Standard-Dynamic-Range (SDR) content and color enhancements for wide color gamut (WCG). However, to make use of GPU display capabilities, we need an interface between userspace and the kernel display drivers that is currently absent in the Linux/DRM KMS API. In the previous blog post I presented how we are expanding the Linux/DRM color management API to expose specific properties of AMD hardware. Now, I'll guide you to the color features for the Linux/AMD display driver. We embark on a journey through DRM/KMS, AMD Display Manager, and AMD Display Core and delve into the color blocks to uncover the secrets of color manipulation within AMD hardware. Here we'll talk less about the color tools and more about where to find them in the hardware. We resort to driver-specific properties to reach AMD hardware blocks with color capabilities. These blocks provide features like predefined transfer functions, color transformation matrices, and 1-dimensional (1D LUT) and 3-dimensional lookup tables (3D LUT). Here, we will understand how these color features are strategically placed into color blocks both before and after blending in Display Pipe and Plane (DPP) and Multiple Pipe/Plane Combined (MPC) blocks. That said, welcome back to the second part of our thrilling journey through AMD's color management realm!

AMD Display Driver in the Linux/DRM Subsystem: The Journey In my 2022 XDC talk "I'm not an AMD expert, but...", I briefly explained the organizational structure of the Linux/AMD display driver where the driver code is bifurcated into a Linux-specific section and a shared-code portion. To reveal AMD's color secrets through the Linux kernel DRM API, our journey led us through these layers of the Linux/AMD display driver's software stack. It includes traversing the DRM/KMS framework, the AMD Display Manager (DM), and the AMD Display Core (DC) [1]. The DRM/KMS framework provides the atomic API for color management through KMS properties represented by struct drm_property. We extended the color management interface exposed to userspace by leveraging existing resources and connecting them with driver-specific functions for managing modeset properties. On the AMD DC layer, the interface with hardware color blocks is established. The AMD DC layer contains OS-agnostic components that are shared across different platforms, making it an invaluable resource. This layer already implements hardware programming and resource management, simplifying the external developer's task. While examining the DC code, we gain insights into the color pipeline and capabilities, even without direct access to specifications. Additionally, AMD developers provide essential support by answering queries and reviewing our work upstream. The primary challenge involved identifying and understanding relevant AMD DC code to configure each color block in the color pipeline. However, the ultimate goal was to bridge the DC color capabilities with the DRM API. For this, we changed the AMD DM, the OS-dependent layer connecting the DC interface to the DRM/KMS framework. We defined and managed driver-specific color properties, facilitated the transport of user space data to the DC, and translated DRM features and settings to the DC interface. Considerations were also made for differences in the color pipeline based on hardware capabilities.

Exploring Color Capabilities of the AMD display hardware Now, let's dive into the exciting realm of AMD color capabilities, where an abundance of techniques and tools await to make your colors look extraordinary across diverse devices. First, we need to know a little about the color transformation and calibration tools and techniques that you can find in different blocks of the AMD hardware. I borrowed some images from [2] [3] [4] to help you understand the information.

Predefined Transfer Functions (Named Fixed Curves): Transfer functions serve as the bridge between the digital and visual worlds, defining the mathematical relationship between digital color values and linear scene/display values and ensuring consistent color reproduction across different devices and media. You can learn more about curves in the chapter GPU Gems 3 - The Importance of Being Linear by Larry Gritz and Eugene d'Eon. ITU-R 2100 introduces three main types of transfer functions:
  • OETF: the opto-electronic transfer function, which converts linear scene light into the video signal, typically within a camera.
  • EOTF: electro-optical transfer function, which converts the video signal into the linear light output of the display.
  • OOTF: opto-optical transfer function, which has the role of applying the "rendering intent".
AMD's display driver supports the following pre-defined transfer functions (aka named fixed curves):
  • Linear/Unity: linear/identity relationship between pixel value and luminance value;
  • Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
  • sRGB: the piece-wise transfer function from IEC 61966-2-1:1999, defined around a 2.4 exponent;
  • BT.709: has a linear segment in the bottom part and then a power function with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by ITU-R BT.709-6;
  • PQ (Perceptual Quantizer): used for HDR display, allows luminance range capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
These capabilities vary depending on the hardware block, with some utilizing hardcoded curves and others relying on AMD's color module to construct curves from standardized coefficients. It also supports user/custom curves built from a lookup table.
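To make the shape of these named curves concrete, here is a minimal C sketch of two of them, using the constants published in IEC 61966-2-1 and ITU-R BT.709-6 (the function names are only illustrative and not part of any driver API):
#include <math.h>

/* sRGB EOTF: encoded value [0,1] -> linear light [0,1] (IEC 61966-2-1). */
static double srgb_eotf(double v)
{
	return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

/* BT.709 OETF: linear scene light [0,1] -> encoded value [0,1] (ITU-R BT.709-6).
 * Note the linear segment near black and the 0.45 (~1/2.22) power above it. */
static double bt709_oetf(double l)
{
	return l < 0.018 ? 4.5 * l : 1.099 * pow(l, 0.45) - 0.099;
}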

1D LUTs (1-dimensional Lookup Table): A 1D LUT is a versatile tool, defining a one-dimensional color transformation based on a single parameter. It's very well explained by Jeremy Selan at GPU Gems 2 - Chapter 24 Using Lookup Tables to Accelerate Color Transformations. It enables adjustments to color, brightness, and contrast, making it ideal for fine-tuning. In the Linux AMD display driver, the atomic API offers a 1D LUT with 4096 entries and 8-bit depth, while legacy gamma uses a size of 256.
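As a rough illustration of what such a LUT looks like from userspace, the sketch below fills an array of struct drm_color_lut (the format used for DRM LUT blobs, where each channel is stored as an unsigned 16-bit value) with a plain gamma 2.2 curve; the 4096-entry size follows the capability described above:
#include <math.h>
#include <stdint.h>
#include <drm_mode.h>	/* struct drm_color_lut, from the libdrm/kernel uapi headers */

#define LUT_SIZE 4096	/* matches the entry count mentioned above */

/* Fill a DRM 1D LUT blob with a plain gamma 2.2 curve, identical on all channels. */
static void fill_gamma22_lut(struct drm_color_lut lut[LUT_SIZE])
{
	for (int i = 0; i < LUT_SIZE; i++) {
		double in = (double)i / (LUT_SIZE - 1);	/* normalized input */
		uint16_t out = (uint16_t)(pow(in, 2.2) * 0xffff + 0.5);

		lut[i].red = lut[i].green = lut[i].blue = out;
		lut[i].reserved = 0;
	}
}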

3D LUTs (3-dimensional Lookup Table): These tables work in three dimensions: red, green, and blue. They're perfect for complex color transformations and adjustments between color channels. They're also more complex to manage and require more computational resources. Jeremy also explains 3D LUTs at GPU Gems 2 - Chapter 24 Using Lookup Tables to Accelerate Color Transformations.

CTM (Color Transformation Matrices): Color transformation matrices facilitate the transition between different color spaces, playing a crucial role in color space conversion.
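For reference, the DRM API carries CTMs as struct drm_color_ctm, whose entries are documented as S31.32 sign-magnitude fixed point. A minimal conversion helper from a double-precision matrix might look like this (a sketch, not code taken from the driver):
#include <math.h>
#include <stdint.h>

/* Encode a double as S31.32 sign-magnitude fixed point, the format documented
 * for struct drm_color_ctm entries: bit 63 is the sign, the low 32 bits are
 * the fractional part. */
static uint64_t ctm_to_s31_32(double coeff)
{
	uint64_t mag = (uint64_t)llround(fabs(coeff) * 4294967296.0); /* * 2^32 */

	return (coeff < 0.0 ? (1ULL << 63) : 0) | mag;
}

/* Example: an identity matrix, row-major, ready to be copied into the blob. */
static void fill_identity_ctm(uint64_t matrix[9])
{
	static const double identity[9] = {
		1.0, 0.0, 0.0,
		0.0, 1.0, 0.0,
		0.0, 0.0, 1.0,
	};

	for (int i = 0; i < 9; i++)
		matrix[i] = ctm_to_s31_32(identity[i]);
}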

HDR Multiplier: HDR multiplier is a factor applied to the color values of an image to increase their overall brightness.

AMD Color Capabilities in the Hardware Pipeline First, let's take a closer look at the AMD Display Core Next hardware pipeline in the Linux kernel documentation for AMDGPU driver - Display Core Next. In the AMD Display Core Next hardware pipeline, we encounter two hardware blocks with color capabilities: the Display Pipe and Plane (DPP) and the Multiple Pipe/Plane Combined (MPC). The DPP handles color adjustments per plane before blending, while the MPC engages in post-blending color adjustments. In short, we expect DPP color capabilities to match up with DRM plane properties, and MPC color capabilities to play nice with DRM CRTC properties. Note: here's the catch: there are some DRM CRTC color transformations that don't have a corresponding AMD MPC color block, and vice versa. It's like a puzzle, and we're here to solve it!

AMD Color Blocks and Capabilities We can finally talk about the color capabilities of each AMD color block. As it varies based on the generation of hardware, let's take the DCN3+ family as reference. What's possible to do before and after blending depends on hardware capabilities described in the kernel driver by struct dpp_color_caps and struct mpc_color_caps. The AMD Steam Deck hardware provides a tangible example of these capabilities. Therefore, we take the SteamDeck/DCN301 driver as an example and look at the color pipeline capabilities described in the file: drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resources.c
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1; // If it is a Display Core Next (DCN): yes. Zero means DCE.
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1; // Input Color Space Conversion (CSC) matrix.
dc->caps.color.dpp.dgam_ram = 0; // The old degamma block for degamma curve (hardcoded and LUT). "Gamma Correction" is the new one.
dc->caps.color.dpp.dgam_rom_caps.srgb = 1; // sRGB hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1; // BT2020 hardcoded curve support (seems not actually in use)
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1; // Gamma 2.2 hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.pq = 1; // PQ hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.hlg = 1; // HLG hardcoded curve support
dc->caps.color.dpp.post_csc = 1; // CSC matrix
dc->caps.color.dpp.gamma_corr = 1; // New "Gamma Correction" block for degamma user LUT;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1; // 3D LUT support. If so, it's always preceded by a shaper curve. 
dc->caps.color.dpp.ogam_ram = 1; // "Blend Gamma" block for custom curve just after blending
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1; // Post-blending CTM (pre-blending CTM is always supported)
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; // Post-blending 3D LUT (preceded by shaper curve)
dc->caps.color.mpc.ogam_ram = 1; // Post-blending regamma.
// No pre-defined TF supported for regamma.
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1; // Output CSC matrix.
I included some inline comments in each element of the color caps to quickly describe them, but you can find the same information in the Linux kernel documentation. See more in struct dpp_color_caps, struct mpc_color_caps and struct rom_curve_caps. Now, using this guideline, we go through color capabilities of DPP and MPC blocks and talk more about mapping driver-specific properties to corresponding color blocks.

DPP Color Pipeline: Before Blending (Per Plane) Let's explore the capabilities of DPP blocks and what you can achieve with each color block. The very first thing to pay attention to is the display architecture of the hardware: previously, AMD used a display architecture called DCE - Display and Compositing Engine; newer hardware follows DCN - Display Core Next. The architecture is described by: dc->caps.color.dpp.dcn_arch

AMD Plane Degamma: TF and 1D LUT Described by: dc->caps.color.dpp.dgam_ram, dc->caps.color.dpp.dgam_rom_caps, dc->caps.color.dpp.gamma_corr AMD Plane Degamma data is mapped to the initial stage of the DPP pipeline. It is utilized to transition from scanout/encoded values to linear values for arithmetic operations. Plane Degamma supports both pre-defined transfer functions and 1D LUTs, depending on the hardware generation. DCN2 and older families handle both types of curve in the Degamma RAM block (dc->caps.color.dpp.dgam_ram); DCN3+ separates hardcoded curves and 1D LUT into two blocks: Degamma ROM (dc->caps.color.dpp.dgam_rom_caps) and the Gamma Correction block (dc->caps.color.dpp.gamma_corr), respectively. Pre-defined transfer functions:
  • they are hardcoded curves (read-only memory - ROM);
  • supported curves: sRGB EOTF, BT.709 inverse OETF, PQ EOTF and HLG OETF, Gamma 2.2, Gamma 2.4 and Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. Setting TF = Identity/Default and LUT as NULL means bypass. References:
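As a sketch of how userspace could hand such a LUT to the driver over the atomic API (assuming the driver-specific plane degamma LUT property has already been looked up by name, e.g. via drmModeObjectGetProperties(), and its id stored in degamma_prop_id; the helper name is made up for illustration):
#include <stddef.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Wrap a filled drm_color_lut array into a property blob and attach it to a
 * plane in an atomic request. Error handling is kept minimal. */
static int set_plane_degamma_lut(int fd, drmModeAtomicReq *req,
				 uint32_t plane_id, uint32_t degamma_prop_id,
				 const struct drm_color_lut *lut, size_t entries)
{
	uint32_t blob_id = 0;

	if (drmModeCreatePropertyBlob(fd, lut, entries * sizeof(*lut), &blob_id))
		return -1;

	/* Passing a blob id of 0 (NULL) instead would request bypass, as described above. */
	return drmModeAtomicAddProperty(req, plane_id, degamma_prop_id, blob_id) < 0 ? -1 : 0;
}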

AMD Plane 3x4 CTM (Color Transformation Matrix) AMD Plane CTM data goes to the DPP Gamut Remap block, supporting a 3x4 fixed point (s31.32) matrix for color space conversions. The data is interpreted as a struct drm_color_ctm_3x4. Setting NULL means bypass. References:

AMD Plane Shaper: TF + 1D LUT Described by: dc->caps.color.dpp.hw_3d_lut The Shaper block fine-tunes color adjustments before applying the 3D LUT, optimizing the use of the limited entries in each dimension of the 3D LUT. On AMD hardware, a 3D LUT always means a preceding shaper 1D LUT used for delinearizing and/or normalizing the color space before applying a 3D LUT, so this entry on DPP color caps dc->caps.color.dpp.hw_3d_lut means support for both shaper 1D LUT and 3D LUT. A pre-defined transfer function enables delinearizing content with or without a shaper LUT, where the AMD color module calculates the resulting shaper curve. Shaper curves go from linear values to encoded values. If we are already in a non-linear space and/or don't need to normalize values, we can set an Identity TF for the shaper, which works similarly to bypass and is also the default TF value. Pre-defined transfer functions:
  • there is no DPP Shaper ROM. Curves are calculated by AMD color modules. Check calculate_curve() function in the file amd/display/modules/color/color_gamma.c.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting Plane Shaper TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT as NULL works as bypass. References:
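Conceptually, that combination can be pictured as composing the two curves into the single table that gets programmed; the sketch below only illustrates the idea (the order in which the color module applies the pre-defined TF and the user LUT is its own implementation detail):
#include <stddef.h>
#include <stdint.h>

typedef double (*curve_fn)(double);	/* maps [0,1] to [0,1] */

/* Conceptual sketch: build one combined LUT from a pre-defined transfer
 * function and a user-supplied custom curve, both passed as callbacks.
 * One composition order is picked here purely for illustration. */
static void compose_tf_and_lut(curve_fn predefined_tf, curve_fn custom_lut,
			       uint16_t *out, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		double x = (double)i / (n - 1);
		double y = custom_lut(predefined_tf(x));

		out[i] = (uint16_t)(y * 0xffff + 0.5);
	}
}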

AMD Plane 3D LUT Described by: dc->caps.color.dpp.hw_3d_lut The 3D LUT in the DPP block facilitates complex color transformations and adjustments. A 3D LUT is a three-dimensional array where each element is an RGB triplet. As mentioned before, dc->caps.color.dpp.hw_3d_lut describes whether the DPP 3D LUT is supported. The AMD driver-specific property advertises the size of a single dimension via the LUT3D_SIZE property. Plane 3D LUT is a blob property where the data is interpreted as an array of struct drm_color_lut elements and the number of entries is LUT3D_SIZE cubed. The array contains samples from the approximated function. Values between samples are estimated by tetrahedral interpolation. The array is accessed with three indices, one for each input dimension (color channel), blue being the outermost dimension, red the innermost. This distribution is better visualized when examining the code in [RFC PATCH 5/5] drm/amd/display: Fill 3D LUT from userspace by Alex Hung:
+	for (nib = 0; nib < 17; nib++) {
+		for (nig = 0; nig < 17; nig++) {
+			for (nir = 0; nir < 17; nir++) {
+				ind_lut = 3 * (nib + 17*nig + 289*nir);
+
+				rgb_area[ind].red = rgb_lib[ind_lut + 0];
+				rgb_area[ind].green = rgb_lib[ind_lut + 1];
+				rgb_area[ind].blue = rgb_lib[ind_lut + 2];
+				ind++;
+			}
+		}
+	}
In our driver-specific approach we opted to advertise its behavior to the userspace instead of implicitly dealing with it in the kernel driver. AMD's hardware supports 3D LUTs with 17-size or 9-size (4913 and 729 entries respectively), and you can choose between 10-bit or 12-bit. In the current driver-specific work we focus on enabling only 17-size 12-bit 3D LUT, as in [PATCH v3 25/32] drm/amd/display: add plane 3D LUT support:
+		/* Stride and bit depth are not programmable by API yet.
+		 * Therefore, only supports 17x17x17 3D LUT (12-bit).
+		 */
+		lut->lut_3d.use_tetrahedral_9 = false;
+		lut->lut_3d.use_12bits = true;
+		lut->state.bits.initialized = 1;
+		__drm_3dlut_to_dc_3dlut(drm_lut, drm_lut3d_size, &lut->lut_3d,
+					lut->lut_3d.use_tetrahedral_9,
+					MAX_COLOR_3DLUT_BITDEPTH);
A refined control of 3D LUT parameters should go through a follow-up version or generic API. Setting 3D LUT to NULL means bypass. References:
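Putting that memory layout into userspace terms, a sketch of filling the blob for a 17-entries-per-dimension 3D LUT, with blue as the outermost loop and red as the innermost (so red-adjacent entries are contiguous), could look like the following; LUT3D_DIM and sample_transform() are illustrative placeholders:
#include <stddef.h>
#include <stdint.h>
#include <drm_mode.h>	/* struct drm_color_lut, from the libdrm/kernel uapi headers */

#define LUT3D_DIM 17	/* assumes the driver advertises LUT3D_SIZE == 17 */

/* Stand-in for whatever color transform is being sampled; here just identity. */
static void sample_transform(double r, double g, double b,
			     double *out_r, double *out_g, double *out_b)
{
	*out_r = r;
	*out_g = g;
	*out_b = b;
}

/* Fill a LUT3D_DIM^3-entry blob with blue as the outermost dimension and red
 * as the innermost, matching the access order described above. */
static void fill_3dlut(struct drm_color_lut *lut)
{
	size_t i = 0;

	for (int b = 0; b < LUT3D_DIM; b++)
		for (int g = 0; g < LUT3D_DIM; g++)
			for (int r = 0; r < LUT3D_DIM; r++, i++) {
				double cr, cg, cb;

				sample_transform((double)r / (LUT3D_DIM - 1),
						 (double)g / (LUT3D_DIM - 1),
						 (double)b / (LUT3D_DIM - 1),
						 &cr, &cg, &cb);

				lut[i].red   = (uint16_t)(cr * 0xffff + 0.5);
				lut[i].green = (uint16_t)(cg * 0xffff + 0.5);
				lut[i].blue  = (uint16_t)(cb * 0xffff + 0.5);
				lut[i].reserved = 0;
			}
}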

AMD Plane Blend/Out Gamma: TF + 1D LUT Described by: dc->caps.color.dpp.ogam_ram The Blend/Out Gamma block applies the final touch-up before blending, allowing users to linearize content after the 3D LUT and just before the blending. It supports both 1D LUT and pre-defined TF. We can see the Shaper and Blend LUTs as 1D LUTs that sandwich the 3D LUT. So, if we don't need 3D LUT transformations, we may want to only use the Degamma block to linearize and skip Shaper, 3D LUT and Blend. Pre-defined transfer function:
  • there is no DPP Blend ROM. Curves are calculated by AMD color modules;
  • supported curves: Identity, sRGB EOTF, BT.709 inverse OETF, PQ EOTF, HLG inverse OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. If plane_blend_tf_property != Identity TF, AMD color module will combine the user LUT values with pre-defined TF into the LUT parameters to be programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

MPC Color Pipeline: After Blending (Per CRTC)

DRM CRTC Degamma 1D LUT The degamma lookup table (LUT) for converting framebuffer pixel data before applying the color conversion matrix. The data is interpreted as an array of struct drm_color_lut elements. Setting NULL means bypass. Not really supported. The driver is currently reusing the DPP degamma LUT block (dc->caps.color.dpp.dgam_ram and dc->caps.color.dpp.gamma_corr) for supporting DRM CRTC Degamma LUT, as explained in [PATCH v3 20/32] drm/amd/display: reject atomic commit if setting both plane and CRTC degamma.

DRM CRTC 3x3 CTM Described by: dc->caps.color.mpc.gamut_remap It sets the current transformation matrix (CTM) applied to pixel data after the lookup through the degamma LUT and before the lookup through the gamma LUT. The data is interpreted as a struct drm_color_ctm. Setting NULL means bypass.

DRM CRTC Gamma 1D LUT + AMD CRTC Gamma TF Described by: dc->caps.color.mpc.ogam_ram After all that, you might still want to convert the content to wire encoding. No worries, in addition to the DRM CRTC 1D LUT, we've got an AMD CRTC gamma transfer function (TF) to make it happen. Possible TF values are defined by enum amdgpu_transfer_function. Pre-defined transfer functions:
  • there is no MPC Gamma ROM. Curves are calculated by AMD color modules.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting CRTC Gamma TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

Others

AMD CRTC Shaper and 3D LUT We have previously worked on exposing the CRTC shaper and CRTC 3D LUT, but they were removed from the AMD driver-specific color series because they lack a userspace use case. The CRTC shaper and 3D LUT work similarly to the plane shaper and 3D LUT, but after blending (MPC block). The difference here is that setting the Shaper and Gamma blocks together (i.e., neither in bypass) is not expected, since both blocks are used to delinearize the input space. In summary, we either set Shaper + 3D LUT or Gamma.

Input and Output Color Space Conversion There are two other color capabilities of AMD display hardware that were integrated into DRM by previous work and are worth a brief explanation here. The DC Input CSC sets pre-defined coefficients from the values of the DRM plane color_range and color_encoding properties. It is used for color space conversion of the input content. On the other hand, the DC Output CSC (OCSC) sets pre-defined coefficients from the DRM connector colorspace property. It is used for color space conversion of the composed image to the one supported by the sink. References:

The search for rainbow treasures is not over yet If you want to understand a little more about this work, be sure to watch the two talks Joshua and I presented at XDC 2023 about AMD/Steam Deck colors on Gamescope. In the time between the first and second part of this blog post, Uma Shashank and Chaitanya Kumar Borah published the plane color pipeline for Intel and Harry Wentland implemented a generic API for DRM based on VKMS support. We discussed these two proposals and the next steps for Color on Linux during the Color Management workshop at XDC 2023, and I briefly shared the workshop results in the 2023 XDC lightning talk session. The search for rainbow treasures is not over yet! We plan to meet again next year in the 2024 Display Hackfest in Coruña, Spain (Igalia's HQ) to keep up the pace and continue advancing today's display needs on Linux. Finally, a HUGE thank you to everyone who worked with me on exploring AMD's color capabilities and making them available in userspace.

5 November 2023

Thorsten Alteholz: My Debian Activities in October 2023

FTP master This month I accepted 361 and rejected 34 packages. The overall number of packages that got accepted was 362. Debian LTS This was my hundred-twelfth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded: Unfortunately upstream still could not resolve whether the patch for CVE-2023-42118 of libspf2 is valid, so no progress happened here.
I also continued to work on bind9 and try to understand why some tests fail. Last but not least I did some days of frontdesk duties and took part in the LTS meeting. Debian ELTS This month was the sixty-third ELTS month. During my allocated time I uploaded: I also continued to work on bind9 and, as with the version in LTS, I try to understand why some tests fail. Last but not least I did some days of frontdesk duties. Debian Printing This month I uploaded a new upstream version of: Within the context of preserving old printing packages, I adopted: If you know of any other package that is also needed and still maintained by the QA team, please tell me. I also uploaded new upstream versions of packages or uploaded a package to fix one or the other issue: This work is generously funded by Freexian! Debian Mobcom This month I uploaded a package to fix one or the other issue: Other stuff This month I uploaded new upstream versions of packages, did a source upload for the transition or uploaded it to fix one or the other issue:

31 October 2023

Iustin Pop: Raspberry PI OS: upgrading and cross-grading

One of the downsides of running Raspberry PI OS is the fact that - not having the resources of pure Debian - upgrades are not recommended, and cross-grades (migrating between armhf and arm64) are not even mentioned. Is this really true? It is, after all, a Debian-based system, so it should in theory be doable. Let's try!

Upgrading The recently announced release based on Debian Bookworm here says:
We have always said that for a major version upgrade, you should re-image your SD card and start again with a clean image. In the past, we have suggested procedures for updating an existing image to the new version, but always with the caveat that we do not recommend it, and you do this at your own risk. This time, because the changes to the underlying architecture are so significant, we are not suggesting any procedure for upgrading a Bullseye image to Bookworm; any attempt to do this will almost certainly end up with a non-booting desktop and data loss. The only way to get Bookworm is either to create an SD card using Raspberry Pi Imager, or to download and flash a Bookworm image from here with your tool of choice.
Which means, it's time to actually try it. Turns out it's actually trivial, if you use RPIs as headless servers. I had only three issues:
  • if using an initrd, the new initrd-building scripts/hooks are looking for some binaries in /usr/bin, and not in /bin; solution: manually install the usrmerge package, and then re-run dpkg --configure -a;
  • also if using an initrd, the scripts are looking for the kernel config file in /boot/config-$(uname -r), and the raspberry pi kernel package doesn't provide this; workaround: modprobe configs && zcat /proc/config.gz > /boot/config-$(uname -r);
  • and finally, on normal RPI systems that don't use manual configuration of interfaces in /etc/network/interfaces, migrating from the previous dhcpcd to NetworkManager will break network connectivity, and require you to log in locally and fix things.
I expect most people to hit only the 3rd, and almost no-one to use initrd on raspberry pi. But, overall, aside from these two issues and a couple of cosmetic ones (login.defs being rewritten from scratch and showing a baffling diff, for example), it was easy. Is it worth doing? Definitely. Had no data loss, and no non-booting system.

Cross-grading (32 bit to 64 bit userland) This one is actually painful. Internet searches go from "it's possible, I think" to "it's definitely not worth trying". Examples: Aside from these, there are a gazillion other posts about switching the kernel to 64 bit. And that's worth doing on its own, but it's only half the way. So, armed with two different systems - an RPI4 4GB and an RPI Zero W2 - I tried to do this. And while it can be done, it takes many hours - the first system was about 6 hours, the second the same, and a third RPI4 probably took ~3 hours only since I knew the problematic issues. So, what are the steps? Basically:
  • install devscripts, since you will need dget
  • enable new architecture in dpkg: dpkg --add-architecture arm64
  • switch over apt sources to include the 64 bit repos, which are different than the 32 bit ones (Raspberry PI OS did a migration here; normally a single repository has all architectures, of course)
  • downgrade all custom rpi packages/libraries to the standard bookworm/bullseye version, since dpkg won't usually allow a single library package to have different versions (I think it's possible to override, but I didn't bother)
  • install libc for the arm64 arch (this takes some effort, it s actually a set of 3-4 packages)
  • once the above is done, install whiptail:arm64 and rejoice at running a 64-bit binary!
  • then painfully go through sets of packages and migrate the set to arm64:
    • sometimes this works via apt, sometimes you'll need to use dget and dpkg -i
    • make sure you download both the armhf and arm64 versions before doing dpkg -i, since you'll need to roll back some installs
  • at one point, you'll be able to switch over dpkg and apt to arm64, at which point the default architecture flips over; from here, if you've done it at the right moment, it becomes very easy; you'll probably need an apt install --fix-broken, though, at first
  • and then, finish by replacing all packages with arm64 versions
  • and then, dpkg --remove-architecture armhf, reboot, and profit!
But it's tears and blood to get to that point.

Pain point 1: RPI custom versions of packages Since the 32bit armhf architecture is a bit weird - having many variations - it turns out that raspberry pi OS has many packages that are very slightly tweaked to disable a compilation flag or work around build/test failures, or whatnot. Since we talk here about 64-bit capable processors, almost none of these are needed, but they do make life harder since the 64 bit version doesn't have those overrides. So what is needed would be to say "downgrade all armhf packages to the version in the Debian upstream repo", but I couldn't find the right apt pinning incantation to do that. So what I did was to remove the 32bit repos, then use apt-show-versions to see which packages have versions that are no longer in any repo, then downgrade them. There's a further, minor, complication that there were about 3-4 packages with the same version but a different hash (!), which simply needed apt install --reinstall, I think.

Pain point 2: architecture independent packages There is one very big issue with dpkg in all this story, and the one that makes things very problematic: while you can have a library package installed multiple times for different architectures, as the files live in different paths, a non-library package can only be installed once (usually). For binary packages (arch:any), that is fine. But architecture-independent packages (arch:all) are problematic since usually they depend on a binary package, but they always depend on the default architecture version! Hrmm, and I just realise I don't have logs from this, so I'm only ~80% confident. But basically:
  • vim-solarized (arch:all) depends on vim (arch:any)
  • if you replace vim armhf with vim arm64, this will break vim-solarized, until the default architecture becomes arm64
So you need to keep track of which packages apt will de-install, for later re-installation. It is possible that Multi-Arch: foreign solves this, per the debian wiki which says:
Note that even though Architecture: all and Multi-Arch: foreign may look like similar concepts, they are not. The former means that the same binary package can be installed on different architectures. Yet, after installation such packages are treated as if they were native architecture (by definition the architecture of the dpkg package) packages. Thus Architecture: all packages cannot satisfy dependencies from other architectures without being marked Multi-Arch foreign.
It also has warnings about how to properly use this. But, in general, not many packages have it, so it is a problem.

Pain point 3: remove + install vs overwrite It seems that depending on how the solver computes a solution, when migrating a package from 32 to 64 bit, it can choose either to:
  • overwrite in place the package (akin to dpkg -i)
  • remove + install later
The former is OK, the latter is not. Or, actually, it might be that apt can never do this, for example (edited for brevity):
# apt install systemd:arm64 --no-install-recommends
The following packages will be REMOVED:
  systemd
The following NEW packages will be installed:
  systemd:arm64
0 upgraded, 1 newly installed, 1 to remove and 35 not upgraded.
Do you want to continue? [Y/n] y
dpkg: systemd: dependency problems, but removing anyway as you requested:
 systemd-sysv depends on systemd.
Removing systemd (247.3-7+deb11u2) ...
systemd is the active init system, please switch to another before removing systemd.
dpkg: error processing package systemd (--remove):
 installed systemd package pre-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
 systemd
Processing was halted because there were too many errors.
But at the same time, overwrite in place is all good - via dpkg -i from /var/cache/apt/archives. In this case it manifested via a prerm script, in other cases it manifests via dependencies that are no longer satisfied for packages that can't be removed, etc. etc. So you will have to resort to dpkg -i a lot.

Pain point 4: lib- packages that are not lib During the whole process, it is very tempting to just go ahead and install the corresponding arm64 package for every armhf lib package, in one go, since these can coexist. Well, this simple plan is complicated by the fact that some packages are named libfoo-bar, but actually hold (e.g.) the bar binary for the libfoo package. Examples:
  • libmagic-mgc contains /usr/lib/file/magic.mgc, which conflicts between the 32 and 64 bit versions; of course, it's the exact same file, so this should be an arch:all package, but
  • libpam-modules-bin and liblockfile-bin actually contain binaries (per the -bin suffix)
It's possible to work around all this, but it changes a 1 minute:
# apt install $(dpkg -l | grep ^ii | awk '{print $2}' | grep :armhf | sed -e 's/:armhf/:arm64/')
into a 10-20 minute fight with packages (like most other steps).

Is it worth doing? Compared to the simple bullseye to bookworm upgrade, I'm not sure about this. The result? Yes, definitely, the system feels - weirdly - much more responsive, logged in over SSH. I guess the arm64 base architecture has some more efficient ops than the "lowest denominator" armhf, so to say (e.g. there was in the 32 bit version some rpi-custom package with string ops), and thus migrating to 64 bit makes more things "faster", but this is subjective so it might actually not be true. But from the point of view of the effort? Unless you like to play with dpkg and apt, and understand how these work and break, I'd rather say, migrate to ansible and automate the deployment. It's doable, sure, and by the third system, I got this nailed down pretty well, but it was a lot of time spent. The good aspect is that I did 3 migrations:
  • rpi zero w2: bullseye 32 bit to 64 bit, then bullseye to bookworm
  • rpi 4: bullseye to bookworm, then bookworm 32bit to 64 bit
  • same, again, for a more important system
And all three worked well, with no data loss. But I'm really glad I have this behind me; I probably wouldn't do a fourth system, even if forced. And now, waiting for the RPI 5 to be available. See you!

Russell Coker: Links October 2023

The Daily Kos has an interesting article about a new more effective method of desalination [1]. Here is a video of a crazy guy zapping things with 100 car batteries [2]. This is something you should avoid if you want to die of natural causes. Does dying while making a science video count for a Darwin Award? A Hacker News comment has an interesting explanation of Unix signals [3]. Interesting documentary on the rise of mega corporations [4]. We need to split up Google, Facebook, and Amazon ASAP. Also every phone platform should have competing app stores. Dave Taht gave an interesting LCA lecture about Internet congestion control [5]. He also referenced a web site about projects to alleviate the buffer bloat problem [6]. This tiny event based sensor is an interesting product [7]. It could lead to some interesting (but possibly invasive) technological developments in phones. Tara Barnett's Everything Open lecture "Swiss Army GLAM" had some interesting ideas for community software development [8]. Having lots of small programs communicating with APIs is an interesting way to get people into development. Actually Hardcore Overclocking has an interesting youtube video about the differences between x8 and x14 DDR4 DIMMs [9]. Interesting YouTube video from someone who helped the Kurds defend against Turkey about how war tunnels work [10]. He makes a strong case that the Israeli invasion of the Gaza Strip won't be easy or pleasant.

22 October 2023

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to .. delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested to install and use Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I don't have a separate home server running 24/7 at home and just have everything inside the very compact NAS case. So here is the thing you NEED to know - Home Assistant Container is a DEMO version of Home Assistant! If you want to have a full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use the Home Assistant OS. Ideally on dedicated hardware. Ideally on an HA Green box, but any tiny PC would also work great. Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card for storage gets old very fast. Get a really small x86 PC with at least 4GB RAM and an NVMe SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal. So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation that was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to actually crash consistently when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd.
xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M
That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using the gnome-disk-utility ("Disks" in Gnome menu) and using the "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small, will it not have a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. The HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
After the first boot is completed, the first boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had Home Assistant Mobile installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience. The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before upgrade, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must have. What is a must have, and what you can (really) only get with Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured to use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally in a few clicks and a few strings create several users each with its own access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures. But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs with what settings and configs and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later when you have already forgotten everything. Among the addons that show up immediately after installation are addons to install the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support and ESPHome integration, and a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and also provides a web based terminal that gives access to the OS itself and also to a command line interface, for example, to do package downgrades if needed or see detailed logs. And that is also where you can get the features that are the focus this year for HA developers - voice enablers. However, that is only the beginning.
Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - that is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can and also do anything Python can do. This I will need to explore later. And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - the Firefly III addon and the Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers, and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon. This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, they are contributing fixes and improvements as well. But there must be some way to do this better, together. So I installed MariaDB, created a user and database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, configured the Nordlinger bank connection settings. Then I tried to import the export that I downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned in the export page that sure looked a lot like a backup :D After doing all that work all over again, I needed to make something new so as not to feel like I wasted days of work for no real gain. So I started solving a problem I had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: app and bot. There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying it out, that is not what I will be using most of the time. First of all this requires your Firefly III instance to be accessible from the Internet. Either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering description, amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing category and budget, and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying. An easier alternative is setting up a Telegram bot - it is running in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account with an amount of 5 and the description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty. So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!

20 October 2023

Iustin Pop: How to set a per-app locale in MacOS

After spending ~20+ years with a Linux desktop, I'm trying to expand my desktop setup to include MacOS (well, desktop/laptop, I mean end user in general). And to my surprise, there's no clear repository of MacOS info. Man pages yes, some StackOverflow, some Apple forums, but no canonical version. Or, I didn't find it; please enlighten me. Another issue is that Apple apparently changes behaviour without clearly documenting it. In this specific case, the region part of the locale went through significant churn lately. So, my goal: In Linux, this would simply mean running the app with the correct environment variables. But MacOS deprecated this a while back (it used to work). After reading what I could, the solution is quite easy, just not obvious:
% defaults read .GlobalPreferences | grep en_
    AKLastLocale = "en_CH";
    AppleLocale = "en_CH";
% defaults read -app FooBar
(has no AppleLocale key)
% defaults write -app FooBar AppleLocale en_US
And that's it. Now, the defaults man page says the global-global is NSGlobalDomain; I don't know where I got the .GlobalPreferences. But I only needed to know the key name (in this case, AppleLocale - of course it couldn't be LC_ALL/LANG). One day I'll know MacOS better, but I've been trying to learn it for 2+ years now, and it's not a smooth ride. Old dog, new tricks, right?

19 October 2023

Russ Allbery: Review: The Cassini Division

Review: The Cassini Division, by Ken MacLeod
Series: Fall Revolution #3
Publisher: Tor
Copyright: 1998
Printing: August 2000
ISBN: 0-8125-6858-3
Format: Mass market
Pages: 305
The Cassini Division is the third book in the Fall Revolution series and a fairly direct sequel (albeit with different protagonists) to The Stone Canal. This is not a good place to start the series. It's impossible to talk about the plot of this book without discussing the future history of this series, which arguably includes some spoilers for The Star Fraction and The Stone Canal. I don't think the direction of history matters that much in enjoying the previous books, but read the first two books of the series before this review if you want to avoid all spoilers. When the Outwarders uploaded themselves and went fast, they did a lot of strange things: an interstellar probe contrary to all known laws of physics, the disassembly of Ganymede, and the Malley Mile, which plays a significant role in The Stone Canal. They also crashed the Earth. This was not entirely their fault. There were a lot of politics, religious fundamentalism, and plagues in play as well. But the storm of viruses broadcast from their transformed Jupiter shut down essentially all computing equipment on Earth, which set off much of the chaos. The results were catastrophic, and also politically transformative. Now, the Solar Union is a nearly unified anarchosocialist society, with only scattered enclaves of non-cooperators left outside that structure. Ellen May Ngewthu is a leader of the Cassini Division, the bulwark that stands between humans and the Outwarders. The Division ruthlessly destroys any remnant or probe that dares rise out of Jupiter's atmosphere, ensuring that the Outwarders, whatever they have become after untold generations of fast evolution, stay isolated to the one planet they have absorbed. The Division is very good at what they do. But there is a potential gap in that line of defense: there are fast folk in storage at the other end of the Malley Mile, on New Mars, and who knows what the deranged capitalists there will do or what forces they might unleash. The one person who knows a path through the Malley Mile isn't talking, so Ellen goes in search of the next best thing: the non-cooperator scientist Isambard Kingdom Malley. I am now thoroughly annoyed at how politics are handled in this series, and much less confused by the frequency with which MacLeod won Prometheus Awards from the Libertarian Futurist Society. Some of this is my own fault for having too high of hopes for political SF, but nothing in this series so far has convinced me that MacLeod is seriously engaging with political systems. Instead, the world-building to date makes the classic libertarian mistake of thinking societies will happily abandon stability and predictability in favor of their strange definition of freedom. The Solar Union is based on what Ellen calls the true knowledge, which is worth quoting in full so that you know what kind of politics we're talking about:
Life is a process of breaking down and using other matter, and if need be, other life. Therefore, life is aggression, and successful life is successful aggression. Life is the scum of matter, and people are the scum of life. There is nothing but matter, forces, space and time, which together make power. Nothing matters, except what matters to you. Might makes right, and power makes freedom. You are free to do whatever is in your power, and if you want to survive and thrive you had better do whatever is in your interests. If your interests conflict with those of others, let the others pit their power against yours, everyone for theirselves. If your interests coincide with those of others, let them work together with you, and against the rest. We are what we eat, and we eat everything. All that you really value, and the goodness and truth and beauty of life, have their roots in this apparently barren soil. This is the true knowledge. We had founded our idealism on the most nihilistic implications of science, our socialism on crass self-interest, our peace on our capacity for mutual destruction, and our liberty on determinism. We had replaced morality with convention, bravery with safety, frugality with plenty, philosophy with science, stoicism with anaesthetics and piety with immortality. The universal acid of the true knowledge had burned away a world of words, and exposed a universe of things. Things we could use.
This is certainly something that some people will believe, particularly cynical college students who love political theory, feeling smarter than other people, and calling their pet theories things like "the true knowledge." It is not even remotely believable as the governing philosophy of a solar confederation. The point of government for the average person in human society is to create and enforce predictable mutual rules that one can use as a basis for planning and habits, allowing you to not think about politics all the time. People who adore thinking about politics have great difficulty understanding how important it is to everyone else to have ignorable government. Constantly testing your power against other coalitions is a sport, not a governing philosophy. Given the implication that this testing is through violence or the threat of violence, it beggars belief that any large number of people would tolerate that type of instability for an extended period of time. Ellen is fully committed to the true knowledge. MacLeod likely is not; I don't think this represents the philosophy of the author. But the primary political conflict in this novel famous for being political science fiction is between the above variation of anarchy and an anarchocapitalist society, neither of which are believable as stable political systems for large numbers of people. This is a bit like seeking out a series because you were told it was about a great clash of European monarchies and discovering it was about a fight between Liberland and Sealand. It becomes hard to take the rest of the book seriously. I do realize that one point of political science fiction is to play with strange political ideas, similar to how science fiction plays with often-implausible science ideas. But those ideas need some contact with human nature. If you're going to tell me that the key to clawing society back from a world-wide catastrophic descent into chaos is to discard literally every social system used to create predictability and order, you had better be describing aliens, because that's not how humans work. The rest of the book is better. I am untangling a lot of backstory for the above synopsis, which in the book comes in dribs and drabs, but piecing that together is good fun. The plot is far more straightforward than the previous two books in the series: there is a clear enemy, a clear goal, and Ellen goes from point A to point B in a comprehensible way with enough twists to keep it interesting. The core moral conflict of the book is that Ellen is an anti-AI fanatic to the point that she considers anyone other than non-uploaded humans to be an existential threat. MacLeod gives the reader both reasons to believe Ellen is right and reasons to believe she's wrong, which maintains an interesting moral tension. One thing that MacLeod is very good at is what Bob Shaw called "wee thinky bits." I think my favorite in this book is the computer technology used by the Cassini Division, who have spent a century in close combat with inimical AI capable of infecting any digital computer system with tailored viruses. As a result, their computers are mechanical non-Von-Neumann machines, but mechanical with all the technology of a highly-advanced 24th century civilization with nanometer-scale manufacturing technology. It's a great mental image and a lot of fun to think about. 
This is the only science fiction novel that I can think of that has a hard-takeoff singularity that nonetheless is successfully resisted and fought to a stand-still by unmodified humanity. Most writers who were interested in the singularity idea treated it as either a near-total transformation leaving only remnants or as something that had to be stopped before it started. MacLeod realizes that there's no reason to believe a post-singularity form of life would be either uniform in intent or free from its own baffling sudden collapses and reversals, which can be exploited by humans. It makes for a much better story. The sociology of this book is difficult to swallow, but the characterization is significantly better than the previous books of the series and the plot is much tighter. I was too annoyed by the political science to fully enjoy it, but that may be partly the fault of my expectations coming in. If you like chewy, idea-filled science fiction with a lot of unexplained world-building that you have to puzzle out as you go, you may enjoy this, although unfortunately I think you need to read at least The Stone Canal first. The ending was a bit unsatisfying, but even that includes some neat science fiction ideas. Followed by The Sky Road, although I understand it is not a straightforward sequel. Rating: 6 out of 10

16 October 2023

Scarlett Gately Moore: KDE: Debian: Hopefully a short goodbye for now.

KDE MascotKDE Mascot
I have been working around the clock and over the weekend trying to get the snapcraft files transitioned into their respective repos. What does this mean for users? Faster releases for Snaps and closer collaboration between snapcrafters and application developers, so bugs get resolved much quicker. Unfortunately, I have 2 days to finish before my internet gets cut off. I did not make enough to pay the bill. Seeing as this is the first time in a year, I am absolutely, positively grateful for all of you and your support over the past year. I know my work is appreciated! I will never be homeless or starve thanks to my wonderful local community, but the Internet bill is not something we can barter or trade labor for. I have caught up on my Debian obligations (so no MIA needed!). KDE neon is in good hands with Jonathan and Carlos. So for now, farewell (I assure you I will be back!) https://gofund.me/b8b69e54

12 October 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, September 2023 (by Santiago Ruano Rincón)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In September, 21 contributors were paid to work on Debian LTS. Their reports are available:
  • Abhijith PA did 10.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 4.0h to the next month.
  • Adrian Bunk did 7.0h (out of 17.0h assigned), thus carrying over 10.0h to the next month.
  • Anton Gladky did 9.5h (out of 7.5h assigned and 7.5h from previous period), thus carrying over 5.5h to the next month.
  • Bastien Roucariès did 16.0h (out of 15.5h assigned and 1.5h from previous period), thus carrying over 1.0h to the next month.
  • Ben Hutchings did 17.0h (out of 17.0h assigned).
  • Chris Lamb did 17.0h (out of 17.0h assigned).
  • Emilio Pozuelo Monfort did 30.0h (out of 30.0h assigned).
  • Guilhem Moulin did 18.25h (out of 18.25h assigned).
  • Helmut Grohne did 10.0h (out of 10.0h assigned).
  • Lee Garrett did 17.0h (out of 16.5h assigned and 0.5h from previous period).
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 4.5h (out of 0h assigned and 24.0h from previous period), thus carrying over 19.5h to the next month.
  • Roberto C. Sánchez did 5.0h (out of 12.0h assigned), thus carrying over 7.0h to the next month.
  • Santiago Ruano Rincón did 7.75h (out of 16.0h assigned), thus carrying over 8.25h to the next month.
  • Sean Whitton did 7.0h (out of 7.0h assigned).
  • Sylvain Beucler did 10.5h (out of 17.0h assigned), thus carrying over 6.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 13.25h (out of 16.0h assigned), thus carrying over 2.75h to the next month.

Evolution of the situation In September, we released 44 DLAs. The month of September was a busy one for the LTS Team. A notable security issue fixed in September was the high-severity CVE-2023-4863, a heap buffer overflow that allowed remote attackers to perform an out-of-bounds memory write via a crafted WebP file. This CVE was covered by three DLAs for different packages: firefox-esr, libwebp and thunderbird. The backported libwebp patch was sent upstream, who adapted and applied it to the 0.6.1 branch. It is also worth noting that LTS contributor Markus Koschany included in his work updates to packages in Debian Bullseye and Bookworm that are under the umbrella of the Security Team: xrdp, jetty9 and mosquitto. As every month, there was important behind-the-scenes work by the Front Desk staff, who triaged, analyzed and reviewed dozens of vulnerabilities to decide whether they warrant a security update. This is very important work, since we need to trade off between the frequency of updates and the stability of the LTS release.

Thanks to our sponsors Sponsors that joined recently are in bold.

10 October 2023

Julian Andres Klode: Divergence - A case for different upgrade approaches

APT currently knows about three types of upgrades: All of these upgrade types are necessary to deal with upgrades within a distribution release. Yes, sometimes even removals may be needed because bug fixes require adding a Conflicts somewhere. In Ubuntu we have another type of upgrade, handled by a separate tool: release upgrades. ubuntu-release-upgrader changes your sources.list and applies various quirks to the upgrade. In this post, I want to look not at the quirk aspects but discuss how dependency solving should differ between intra-release and inter-release upgrades. Previous solver projects (such as Mancoosi) operated under the assumption that minimizing the number of changes performed should ultimately be the main goal of a solver. This makes sense, as every change carries risk. However, it ignores a different risk, which especially applies when upgrading from one distribution release to a newer one: increasing divergence from the norm. Consider a person who installs foo in Debian 12. foo depends on a | b, so a will be automatically installed to satisfy the dependency. A release later, a has some known issues and b is preferred, so the dependency now reads: b | a. A classic solver would keep a installed because it was installed before, leading upgraded installs to have foo, a installed whereas new systems have foo, b installed. As systems get upgraded over and over, they diverge further and further from new installs, to the point that it adds substantial support effort. My proposal for the new APT solver is that when we perform release upgrades, we forget which packages were previously automatically installed. We effectively perform a normalization: all systems with the same set of manually installed packages will end up with the same set of automatically installed packages. Consider the solving starting with an empty set and then installing the latest version of each previously manually installed package: it will now see that foo depends on b | a and install b (and a will be removed later on, as it's not part of the solution). Another case of divergence is Suggests handling. Consider that foo also Suggests s. You now install another package bar that depends on s, hence s gets installed. Upon removing bar, s is not removed automatically because foo still suggests it (and you may have grown used to foo's integration of s). This is because apt considers Suggests to be important: they won't be automatically installed, but they will not be automatically removed either. In Ubuntu, we unset that policy on release upgrades to normalize the systems. The reasoning for that is simple: while you may have grown to use s as part of foo during the release, an upgrade to the next release already is big enough that removing s is going to have less of an impact; breakage of workflows is expected between release upgrades. I believe that apt release-upgrade will benefit from both of these design choices, and in the end it boils down to a simple mantra:
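To make the normalization idea concrete in practical terms, here is a rough sketch of how it could be approximated by hand today with apt-mark (purely illustrative; this is not how the proposed solver works internally, and the file name is made up):
$ apt-mark showmanual > manual-packages.txt
# After pointing sources.list at the new release, simulate installing only the
# manually installed set; whatever apt pulls in is the "normalized" automatic set.
$ xargs apt-get install --simulate < manual-packages.txt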

Russ Allbery: Review: Chilling Effect

Review: Chilling Effect, by Valerie Valdes
Series: Chilling Effect #1
Publisher: Harper Voyager
Copyright: September 2019
Printing: 2020
ISBN: 0-06-287724-0
Format: Kindle
Pages: 420
Chilling Effect is a space opera, kind of; more on the genre classification in a moment. It is the first volume of a series, although it reaches a reasonable conclusion on its own. It was Valerie Valdes's first novel. Captain Eva Innocente's line of work used to be less than lawful, following in the footsteps of her father. She got out of that life and got her own crew and ship. Now, the La Sirena Negra and its crew do small transport jobs for just enough money to stay afloat. Or, maybe, a bit less than that, when the recipient of a crate full of psychic escape-artist cats goes bankrupt before she can deliver it and get paid. It's a marginal and tenuous life, but at least she isn't doing anything shady. Then the Fridge kidnaps her sister. The Fridge is a shadowy organization of extortionists whose modus operandi is to kidnap a family member of their target, stuff them in cryogenic suspension, and demand obedience lest the family member be sold off as indentured labor after a few decades as a popsicle. Eva will be given missions that she and her crew have to perform. If she performs them well, she will pay off the price of her sister's release. Eventually. Oh, and she's not allowed to tell anyone. I found it hard to place the subgenre of this novel more specifically than comedy-adventure. The technology fits space opera: there are psychic cats, pilots who treat ships as extensions of their own body, brain parasites, a random intergalactic warlord, and very few attempts to explain anything with scientific principles. However, the stakes aren't on the scale that space opera usually goes for. Eva and her crew aren't going to topple governments or form rebellions. They're just trying to survive in a galaxy full of abusive corporations, dodgy clients, and the occasional alien who requires you to carry extensive documentation to prove that you can't be hunted for meat. It is also, as you might guess from that description, occasionally funny. That part of the book didn't mesh for me. Eva is truly afraid for her sister, and some of the events in the book are quite sinister, but the antagonist is an organization called The Fridge that puts people in fridges. Sexual harassment in a bar turns into obsessive stalking by a crazed intergalactic warlord who frequently interrupts the plot by randomly blasting things with his fleet, which felt like something from Hitchhiker's Guide to the Galaxy. The stakes for Eva, and her frustrations at being dragged back into a life she escaped, felt too high for the wacky, comic descriptions of the problems she gets into. My biggest complaint, though, is that the plot is driven by people not telling other people critical information they should know. Eva is keeping major secrets from her crew for nearly the entire book. Other people are also keeping information from Eva. There is a romance subplot driven almost entirely by both parties refusing to talk to each other about the existence of a romance subplot. For some people, this is catnip, but it's one of my least favorite fictional tropes and I found much of the book both frustrating and stressful. Fictional characters keeping important secrets from each other apparently raises my blood pressure. One of the things I did like about this book is that Eva is Hispanic and speaks like it. She resorts to Spanish frequently for curses, untranslatable phrases, aphorisms, derogatory comments, and similar types of emotional communication that don't feel right in a second language. 
Most of the time one can figure out the meaning from context, but Valdes doesn't feel obligated to hold the reader's hand and explain everything. I liked that. I think this approach is more viable in these days of ebook readers that can attempt translations on demand, and I think it does a lot to make Eva feel like a real person. I think the characters are the best part of this book, once one gets past the frustration of their refusal to talk to each other. Eva and the alien ship engineer get the most screen time, but Pink, Eva's honest-to-a-fault friend, was probably my favorite character. I also really enjoyed Min, the ship pilot whose primary goal is to be able to jack into the ship and treat it as her body, and otherwise doesn't particularly care about the rest of the plot as long as she gets paid. A lot of books about ship crews like this one lean hard into found family. This one felt more like a group of coworkers, with varying degrees of friendship and level of interest in their shared endeavors, but without the too-common shorthand of making the less-engaged crew members either some type of villain or someone who needs to be drawn out and turned into a best friend or love interest. It's okay for a job to just be a job, even if it's one where you're around the same people all the time. People who work on actual ships do it all the time. This is a half-serious, half-comic action romp that turned out to not be my thing, but I can see why others will enjoy it. Be prepared for a whole lot of communication failures and an uneven emotional tone, but if you're looking for a space-ships-and-aliens story that doesn't take itself very seriously and has some vague YA vibes, this may work for you. Followed by Prime Deceptions, although I didn't like this well enough to read on. Rating: 6 out of 10

6 October 2023

Emanuele Rocca: Custom Debian Installer and Kernel on a USB stick

There are many valid reasons to create a custom Debian Installer image. You may need to pass some special arguments to the kernel, use a different GRUB version, automate the installation by means of preseeding, use a custom kernel, or modify the installer itself.
If you have an EFI system, which is probably the case in 2023, there is no need to learn complex procedures in order to create a custom Debian Installer stick.
The source of many frustrations is that the ISO format for CDs/DVDs is read-only, but you can just create a VFAT filesystem on a USB stick, copy all ISO contents onto the stick itself, and modify things at will.

Create a writable USB stick
First create a FAT32 filesystem on the removable device and mount it. The device is sdX in the example.
$ sudo parted --script /dev/sdX mklabel msdos
$ sudo parted --script /dev/sdX mkpart primary fat32 0% 100%
$ sudo mkfs.vfat /dev/sdX1
$ sudo mount /dev/sdX1 /mnt/data/
Then copy to the USB stick the installer ISO you would like to modify, debian-testing-amd64-netinst.iso here.
$ sudo kpartx -v -a debian-testing-amd64-netinst.iso
# Mount the first partition on the ISO and copy its contents to the stick
$ sudo mount /dev/mapper/loop0p1 /mnt/cdrom/
$ sudo rsync -av /mnt/cdrom/ /mnt/data/
$ sudo umount /mnt/cdrom
# Same story with the second partition on the ISO
$ sudo mount /dev/mapper/loop0p2 /mnt/cdrom/
$ sudo rsync -av /mnt/cdrom/ /mnt/data/
$ sudo umount /mnt/cdrom
$ sudo kpartx -d debian-testing-amd64-netinst.iso
$ sudo umount /mnt/data
Now try booting from the USB stick just to verify that everything went well and we can start customizing the image.

Boot loader, preseeding, installer hacks
The easiest things we can change now are the shim, GRUB, and GRUB's configuration. The USB stick contains the shim under /EFI/boot/bootx64.efi, while GRUB is at /EFI/boot/grubx64.efi. This means that if you want to test a different shim / GRUB version, you just replace the relevant files. That's it. Take for example /usr/lib/grub/x86_64-efi/monolithic/grubx64.efi from the package grub-efi-amd64-bin, or the signed version from grub-efi-amd64-signed, and copy them under /EFI/boot/grubx64.efi. Or perhaps you want to try out systemd-boot? Then take /usr/lib/systemd/boot/efi/systemd-bootx64.efi from the package systemd-boot-efi, copy it to /EFI/boot/bootx64.efi and you're good to go. Figuring out the right systemd-boot configuration needed to start the Installer is left as an exercise.
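For example, assuming the stick is still mounted at /mnt/data as in the earlier steps (the source paths below are the package-provided files mentioned above):
$ sudo cp /usr/lib/grub/x86_64-efi/monolithic/grubx64.efi /mnt/data/EFI/boot/grubx64.efi
# or, to experiment with systemd-boot instead of GRUB:
$ sudo cp /usr/lib/systemd/boot/efi/systemd-bootx64.efi /mnt/data/EFI/boot/bootx64.efi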
By editing /boot/grub/grub.cfg you can pass arbitrary arguments to the kernel and the Installer itself. See the official Installation Guide for a comprehensive list of boot parameters.
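As a rough illustration, adding an argument is just a matter of editing the linux line of the relevant menuentry. The entry below is a simplified sketch; the actual entries and paths in your grub.cfg will differ between installer images:
menuentry 'Install' {
    # extra installer/kernel arguments go on the linux line; priority=low is just an example
    linux    /install.amd/vmlinuz priority=low vga=788 --- quiet
    initrd   /install.amd/initrd.gz
}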
One very common thing to do is automating the installation using a preseed file. Add the following to the kernel command line: preseed/file=/cdrom/preseed.cfg and create a /preseed.cfg file on the USB stick. As a little example:
d-i time/zone select Europe/Rome
d-i passwd/root-password this-is-the-root-password
d-i passwd/root-password-again this-is-the-root-password
d-i passwd/user-fullname string Emanuele Rocca
d-i passwd/username string ema
d-i passwd/user-password password lol-haha-uh
d-i passwd/user-password-again password lol-haha-uh
d-i apt-setup/no_mirror boolean true
d-i popularity-contest/participate boolean true
tasksel tasksel/first multiselect standard
See Steve McIntyre's awesome page with the full list of available settings and their description: https://preseed.einval.com/debian-preseed/.
Two noteworthy settings are early_command and late_command. They can be used to execute arbitrary commands and thus provide extreme flexibility! You can go as far as replacing parts of the installer with a sed command, or maybe wgetting an entirely different file. This is a fairly easy way to test minor Installer patches. As an example, I once used this to test a patch to grub-installer:
d-i partman/early_command string wget https://people.debian.org/~ema/grub-installer-1035085-1 -O /usr/bin/grub-installer
Finally, the initrd contains all early stages of the installer. It's easy to unpack it, modify whatever component you like, and repack it. Say you want to change a given udev rule:
$ mkdir /tmp/new-initrd
$ cd /tmp/new-initrd
$ zstdcat /mnt/data/install.a64/initrd.gz | sudo cpio -id
$ vi lib/udev/rules.d/60-block.rules
$ find . | cpio -o -H newc | zstd --stdout > /mnt/data/install.a64/initrd.gz

Custom udebs
From a basic architectural standpoint the Debian Installer can be seen as an initrd that loads a series of special Debian packages called udebs. In the previous section we have seen how to (ab)use early_command to replace one of the scripts used by the Installer, namely grub-installer. It turns out that this script is installed by a udeb, so let's do things right and build a new Installer ISO with our custom grub udeb.
Fetch the code for the grub-installer udeb, make your changes and build it with a classic dpkg-buildpackage -rfakeroot.
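A minimal sketch of those steps, assuming the udeb's packaging lives in the installer-team namespace on Salsa (check the actual repository location for the package you are patching):
$ git clone https://salsa.debian.org/installer-team/grub-installer/
$ cd grub-installer/
# make your changes, then build the binary packages
$ dpkg-buildpackage -rfakeroot -b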
Then get the Installer code and install all dependencies:
$ git clone https://salsa.debian.org/installer-team/debian-installer/
$ cd debian-installer/
$ sudo apt build-dep .
Now add the grub-installer udeb to the localudebs directory and create a new netboot image:
$ cp /path/to/grub-installer_1.198_arm64.udeb build/localudebs/
$ cd build
$ fakeroot make clean_netboot build_netboot
Give it some time; soon enough you'll have a brand new ISO to test under dest/netboot/mini.iso.

Custom kernel
Perhaps there's a kernel configuration option you need to enable, or maybe you need a more recent kernel version than what is available in sid.
The Debian Linux Kernel Handbook has all the details for how to do things properly, but here's a quick example.
Get the Debian kernel packaging from salsa and generate the upstream tarball:
$ git clone https://salsa.debian.org/kernel-team/linux/
$ ./debian/bin/genorig.py https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
For RC kernels use the repo from Linus instead of linux-stable.
Now do your thing, for instance, change a config setting by editing debian/config/amd64/config. Don't worry about where you put it in the file, there's a tool from https://salsa.debian.org/kernel-team/kernel-team to fix that:
$ /path/to/kernel-team/utils/kconfigeditor2/process.py .
Now build your kernel:
$ export MAKEFLAGS=-j$(nproc)
$ export DEB_BUILD_PROFILES='pkg.linux.nokerneldbg pkg.linux.nokerneldbginfo pkg.linux.notools nodoc'
$ debian/rules orig
$ debian/rules debian/control
$ dpkg-buildpackage -b -nc -uc
After some time, if everything went well, you should get a bunch of .deb files as well as a .changes file, linux_6.6~rc3-1~exp1_arm64.changes here. To generate the udebs used by the Installer you need to first get a linux-signed .dsc file, and then build it with sbuild in this example:
$ /path/to/kernel-team/scripts/debian-test-sign linux_6.6~rc3-1~exp1_arm64.changes
$ sbuild --dist=unstable --extra-package=$PWD linux-signed-arm64_6.6~rc3+1~exp1.dsc
Excellent, now you should have a ton of .udebs. To build a custom installer image with this kernel, copy them all under debian-installer/build/localudebs/ and then run fakeroot make clean_netboot build_netboot as described in the previous section. In case you are trying to use a different kernel version from what is currently in sid, you will have to install the linux-image package on the system building the ISO, and change LINUX_KERNEL_ABI in build/config/common. The linux-image dependency in debian/control probably needs to be tweaked as well.
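A rough sketch of those two adjustments (the .deb filename and the ABI string are made up for illustration; check the values your build actually produced and the existing syntax in build/config/common before editing):
$ sudo apt install ./linux-image-6.6.0-rc3-arm64_6.6~rc3-1~exp1_arm64.deb
$ sed -i 's/^LINUX_KERNEL_ABI.*/LINUX_KERNEL_ABI ?= 6.6.0-rc3/' build/config/common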
That's it, the new Installer ISO should boot with your custom kernel!
There is going to be another minor obstacle though, as anna will complain that your new kernel cannot be found in the archive. Copy the kernel udebs you have built onto a vfat formatted USB stick, switch to a terminal, and install them all with udpkg:
~ # udpkg -i *.udeb
Now the installation should proceed smoothly.

Russ Allbery: Review: The Far Reaches

Review: The Far Reaches, edited by John Joseph Adams
Publisher: Amazon Original Stories
Copyright: June 2023
ISBN: 1-6625-1572-3
ISBN: 1-6625-1622-3
ISBN: 1-6625-1503-0
ISBN: 1-6625-1567-7
ISBN: 1-6625-1678-9
ISBN: 1-6625-1533-2
Format: Kindle
Pages: 219
Amazon has been releasing anthologies of original short SFF with various guest editors, free for Amazon Prime members. I previously tried Black Stars (edited by Nisi Shawl and Latoya Peterson) and Forward (edited by Blake Crouch). Neither were that good, but the second was much worse than the first. Amazon recently released a new collection, this time edited by long-standing SFF anthology editor John Joseph Adams and featuring a new story by Ann Leckie, which sounded promising enough to give them another chance. The definition of insanity is doing the same thing over and over again and expecting different results. As with the previous anthologies, each story is available separately for purchase or Amazon Prime "borrowing" with separate ISBNs. The sidebar cover is for the first in the sequence. Unlike the previous collections, which were longer novelettes or novellas, my guess is all of these are in the novelette range. (I did not do a word count.) If you're considering this anthology, read the Okorafor story ("Just Out of Jupiter's Reach"), consider "How It Unfolds" by James S.A. Corey, and avoid the rest. "How It Unfolds" by James S.A. Corey: Humans have invented a new form of physics called "slow light," which can duplicate any object that is scanned. The energy expense is extremely high, so the result is not a post-scarcity paradise. What the technology does offer, however, is a possible route to interstellar colonization: duplicate a team of volunteers and a ship full of bootstrapping equipment, and send copies to a bunch of promising-looking exoplanets. One of them might succeed. The premise is interesting. The twists Corey adds on top are even better. What can be duplicated once can be duplicated again, perhaps with more information. This is a lovely science fiction idea story that unfortunately bogs down because the authors couldn't think of anywhere better to go with it than relationship drama. I found the focus annoying, but the ideas are still very neat. (7) "Void" by Veronica Roth: A maintenance worker on a slower-than-light passenger ship making the run between Sol and Centauri unexpectedly is called to handle a dead body. A passenger has been murdered, two days outside the Sol system. Ace is in no way qualified to investigate the murder, nor is it her job, but she's watched a lot of crime dramas and she has met the victim before. The temptation to start poking around is impossible to resist. It's been a long time since I've read a story built around the differing experiences of time for people who stay on planets and people who spend most of their time traveling at relativistic speeds. It's a bit of a retro idea from an earlier era of science fiction, but it's still a good story hook for a murder mystery. None of the characters are that memorable and Roth never got me fully invested in the story, but this was still a pleasant way to pass the time. (6) "Falling Bodies" by Rebecca Roanhorse: Ira is the adopted son of a Genteel senator. He was a social experiment in civilizing the humans: rescue a human orphan and give him the best of Genteel society to see if he could behave himself appropriately. The answer was no, which is how Ira finds himself on Long Reach Station with a parole officer and a schooling opportunity, hopefully far enough from his previous mistakes for a second chance. Everyone else seems to like Rebecca Roanhorse's writing better than I do, and this is no exception. 
Beneath the veneer of a coming-of-age story with a twist of political intrigue, this is brutal, depressing, and awful, with an ending that needs a lot of content warnings. I'm sorry that I read it. (3) "The Long Game" by Ann Leckie: The Imperial Radch trilogy are some of my favorite science fiction novels of all time, but I am finding Leckie's other work a bit hit and miss. I have yet to read a novel of hers that I didn't like, but the short fiction I've read leans more heavily into exploring weird and alien perspectives, which is not my favorite part of her work. This story is firmly in that category: the first-person protagonist is a small tentacled alien creature, a bit like a swamp-dwelling octopus. I think I see what Leckie is doing here: balancing cynicism and optimism, exploring how lifespans influence thinking and planning, and making some subtle points about colonialism. But as a reading experience, I didn't enjoy it. I never liked any of the characters, and the conclusion of the story is the unsettling sort of main-character optimism that seems rather less optimistic to the reader. (4) "Just Out of Jupiter's Reach" by Nnedi Okorafor: K rm n scientists have found a way to grow living ships that can achieve a symbiosis with a human pilot, but the requirements for that symbiosis are very strict and hard to predict. The result was a planet-wide search using genetic testing to find the rare and possibly nonexistent matches. They found seven people. The deal was simple: spend ten years in space, alone, in her ship. No contact with any other human except at the midpoint, when the seven ships were allowed to meet up for a week. Two million euros a year, for as long as she followed the rules, and the opportunity to be part of a great experiment, providing data that will hopefully lead to humans becoming a spacefaring species. The core of this story is told during the seven days in the middle of the mission, and thus centers on people unfamiliar with human contact trying to navigate social relationships after five years in symbiotic ships that reshape themselves to their whims and personalities. The ships themselves link so that the others can tour, which offers both a good opportunity for interesting description and a concretized metaphor about meeting other people. I adore symbiotic spaceships, so this story had me at the premise. The surface plot is very psychological, and I didn't entirely click with it, but the sense of wonder vibes beneath that surface were wonderful. It also feels fresh and new: I've seen most of the ideas before, but not presented or written this way, or approached from quite this angle. Definitely the best story of the anthology. (8) "Slow Time Between the Stars" by John Scalzi: This, on the other hand, was a complete waste of time, redeemed only by being the shortest "story" in the collection. "Story" is generous, since there's only one character and a very dry, linear plot that exists only to make a philosophical point. "Speculative essay" may be closer. The protagonist is the artificial intelligence responsible for Earth's greatest interstellar probe. It is packed with a repository of all of human knowledge and the raw material to create life. Its mission is to find an exoplanet capable of sustaining that life, and then recreate it and support it. The plot, such as it is, follows the AI's decision to abandon that mission and cut off contact with Earth, for reasons that it eventually explains. Every possible beat of this story hit me wrong. 
The sense of wonder attaches to the most prosaic things and skips over the moments that could have provoked real wonder. The AI is both unbelievable and irritating, with all of the smug self-confidence of an Internet reply guy. The prose is overwrought in all the wrong places ("the finger of God, offering the spark to animate the dirt of another world" would totally be this AI's profile quote under their forum avatar). The only thing I liked about the story is the ethical point that it slowly meanders into, which I think I might agree with and at least find plausible. But it's delivered by the sort of character I would actively leave rooms to avoid, in a style that's about as engrossing as a tax form. Avoid. (2) Rating: 5 out of 10

27 September 2023

Antoine Beaupré: How big is Debian?

Now this was quite a tease! For those who haven't seen it, I encourage you to check it out, it has a nice photo of a Debian t-shirt I did not know about, to quote the Fine Article:
Today, when going through a box of old T-shirts, I found the shirt I was looking for to bring to the occasion: [...] For the benefit of people who read this using a non-image-displaying browser or RSS client, they are respectively:
   10 years
  100 countries
 1000 maintainers
10000 packages
and
        1 project
       10 architectures
      100 countries
     1000 maintainers
    10000 packages
   100000 bugs fixed
  1000000 installations
 10000000 users
100000000 lines of code
20 years ago we celebrated eating grilled meat at J0rd1's house. This year, we had vegan tostadas in the menu. And maybe we are no longer that young, but we are still very proud and happy of our project! Now, how would numbers line up today for Debian, 20 years later? Have we managed to get the bugs fixed line increase by a factor of 10? Quite probably, the lines of code we also have, and I can only guess the number of users and installations, which was already just a wild guess back then, might have multiplied by over 10, at least if we count indirect users and installs as well
Now I don't know about you, but I really expected someone to come up with an answer to this, directly on Debian Planet! I have patiently waited for such an answer but enough is enough, I'm a Debian member, surely I can cull all of this together. So, lo and behold, here are the actual numbers from 2023! It doesn't line up as nicely, but it looks something like this:
         1 project
        10 architectures
        30 years
       100 countries (actually 63, but we'd like to have yours!)
      1000 maintainers (yep, still there!)
     35000 packages
    211000 *binary* packages
   1000000 bugs fixed
1000000000 lines of code
 uncounted installations and users, we don't track you
So maybe the more accurate version, rounding to the nearest logarithm, would look something like:
         1 project
        10 architectures
       100 countries (actually 63, but we'd like to have yours!)
      1000 maintainers (yep, still there!)
    100000 packages
   1000000 bugs fixed
1000000000 lines of code
 uncounted installations and users, we don't track you
I really like how the "packages" and "bugs fixed" lines still have an order of magnitude between them there, but the "bugs fixed" vs "lines of code" lines have an extra order of magnitude; that is, we have fixed ten times fewer bugs per line of code since we last did this count, 20 years ago. Also, I am tempted to put 100 years in there, but that would be rounding up too much. Let's give it another 30 years first. Hopefully, some real scientist is going to balk at this crude methodology and come up with some more interesting numbers for the next t-shirt. Otherwise I'm available for bar mitzvahs and children's parties.

21 September 2023

Jonathan Carter: DebConf23

I very, very nearly didn't make it to DebConf this year. I had a bad cold/flu for a few days before I left, and after a negative covid-19 test just minutes before my flight, I decided to take the plunge and travel. This is just everything in chronological order, more or less; it's the only way I could write it.

DebCamp I planned to spend DebCamp working on various issues. Very few of them actually got done. I spent the first few days in bed further recovering, took a covid-19 test when I arrived and another after I felt better, and both were negative, so I'm not sure what exactly was wrong with me, but between that and catching up with other Debian duties, I couldn't make any progress on the packaging work I wanted to do. I'll still post what I intended here, and I'll try to take a few days to focus on these some time next month: Calamares / Debian Live stuff:
  • #980209 installation fails at the install boot loader phase
  • #1021156 calamares-settings-debian: Confusing/generic program names
  • #1037299 Install Debian -> Untrusted application launcher
  • #1037123 Minimal HD space required too small for some live images
  • #971003 Console auto-login doesn't work with sysvinit
At least Calamares has been trixiefied in testing, so there's that! Desktop stuff:
  • #1038660 please set a placeholder theme during development, different from any release
  • #1021816 breeze: Background image not shown any more
  • #956102 desktop-base: unwanted metadata within images
  • #605915 please make it a non-native package
  • #681025 Put old themes in a new package named desktop-base-extra
  • #941642 desktop-base: split theme data files and desktop integrations in separate packages
The Egg theme that I want to develop for testing/unstable is based on Juliette Taka's Homeworld theme that was used for Bullseye. Egg, as in, something that hasn't quite hatched yet. Get it? (for #1038660) Debian Social:
  • Set up Lemmy instance
    • I started setting up a Lemmy instance before DebCamp, and meant to finish it.
  • Migrate PeerTube to new server
    • We got a new physical server for our PeerTube instance; we should have more space for growth, and it would help us fix the streaming feature on our platform.
Loopy: I intended to get the loop for DebConf in good shape before I left, so that we could spend some time during DebCamp making some really nice content. Unfortunately this went very tumbly, but at least we ended up with a loopy that kind of worked and wasn't too horrible. There's always another DebConf to try again, right?
So DebCamp as a usual DebCamp was pretty much a wash (fitting with all the rain we had?) for me; at least it gave me enough time to recover a bit for DebConf proper, and I had enough time left to catch up on some critical DPL duties and put together a few slides for the Bits from the DPL talk.

DebConf Bits From the DPL I had very, very little available time to prepare something for Bits from the DPL, but I managed to put some slides together (available on my wiki page). I mostly covered:
  • A very quick introduction of myself (I've done this so many times, it feels redundant giving my history every time), and some introduction to what it is that the DPL does. I declared my intent not to run for DPL again, and the reasoning behind it, and a few bits of information for people who may intend to stand for DPL next year.
  • The sentiment out there for the Debian 12 release (which has been very positive). How we include firmware by default now, and that we're saying goodbye to two architectures: GNU/KFreeBSD and mipsel.
  • Debian Day and the 30th birthday party celebrations from local groups all over the world (and a reminder about the Local Groups BoF later in the week).
  • I looked forward to Debian 13 (trixie!), and how we're gaining riscv64 as a release architecture, as well as loongarch64, and that plans seem to be forming to fix 2k38 in Debian, and hopefully largely by the time the Trixie release comes by.
  • I made some comments about Enterprise Linux, as people refer to the RHEL eco-system these days, how really bizarre some aspects of it are (like the kernel maintenance), and that some big vendors are choosing to support systems outside of that eco-system now (like CPanel now supporting Ubuntu too). I closed with the quote below from Ian Murdock, and assured the audience that if they want to go out and make money with Debian, they are more than welcome to.
Job Fair I walked through the hallway where the Job Fair was hosted, and enjoyed all the buzz. It's not always easy to get this right, but this year it was very active and energetic; I hope lots of people made some connections! Cheese & Wine Due to state laws and alcohol licenses, we couldn't consume alcohol from outside the state of Kerala in the common areas of the hotel (only in private rooms), so this wasn't quite as big or as fun as our usual C&W parties, since we couldn't share as much from our individual countries and cultures, but we always knew that this was going to be the case for this DebConf, and it still ended up being alright. Day Trip I opted for the forest / waterfalls daytrip. It was really, really long, with lots of time in the bus. I think our trip's organiser underestimated how long it would take between the points on the route (all in all it wasn't that far, but on a bus on a winding mountain road, it takes a long time). We left at 8:00 and only found our way back to the hotel around 23:30. Even though we arrived tired and hungry, we saw some beautiful scenery and animals, and also met indigenous river people who talked about their struggles against being driven out of their place of living multiple times as government invests in new developments like dams and hydro power. Photos are available in the DebConf23 public git repository. Losing a beloved Debian Developer during DebConf To our collective devastation, not everyone made it back from their day trips. Abraham Raji was out on the kayak day trip, and while swimming, got caught by a whirlpool from a drainage system. Even though all of us were properly exhausted and shocked in disbelief at this point, we had to stay up and make some tough decisions. Some initially felt that we had to cancel the rest of DebConf. We also had to figure out how to announce what happened asap, both to the larger project and at DebConf in an official manner, while ensuring that due diligence took place and that the family was informed by the police first before making anything public. We ended up cancelling all the talks for the following day, with an address from the DPL in the morning to explain what had happened. Of all the things I've ever had to do as DPL, this was by far the hardest. The day after that, talks were also cancelled for the morning so that we could attend his funeral. Dozens of DebConf attendees headed out by bus to go pay their final respects, many wearing the t-shirts that Abraham had designed for DebConf. A book of condolences was set up so that everyone who wished to could write a message on how they remembered him. The book will be kept by his family.
Today marks a week since his funeral, and I still feel very raw about it. And even though there was uncertainty whether DebConf should even continue after his death, in hindsight I'm glad that everyone pushed forward. While we were all heartbroken, it was also heartwarming to see people care for each other in all of this. If anything, I think I needed more time at DebConf just to be in that warm aura of emotional support for just a bit longer. There are many people who I wanted to talk to who I barely even had a chance to see. Abraham, or Abru as he was called by some people (which I like because bru in Afrikaans is like bro in English, not sure if that's what it implied locally too), enjoyed artistic pursuits, but he was also passionate about knowledge transfer. He ran classes at DebConf both last year and this year (and I think at other local events too) where he taught people packaging via a quick course that he put together. His enthusiasm for Debian was contagious; a few of the people who he was mentoring came up to me and told me that they were going to see it through and become a DD in honor of him. I can't even remember how I reacted to that; my brain was already so worn out, and stitching that together with the tragedy of what happened while at DebConf was just too much for me. I first met him in person last year in Kosovo; I already knew who he was, so I think we interacted during the online events the year before. He was just one of those people who showed so much promise, and I was curious to see what he'd achieve in the future. Unfortunately, he was taken away from us too soon. Poetry Evening Later in the week we had the poetry evening. This was the first time I had the courage to recite something. I read Ithaka by C.P. Cavafy (translated by Edmund Keely). The first time I heard about this poem was in an interview with Julian Assange's wife, where she mentioned that he really loves this poem, and it caught my attention because I really like the Weezer song Return to Ithaka and always wondered what it was about, so needless to say, that was another rabbit hole at some point. Group Photo Our DebConf photographer organised another group photo for this event; links to high-res versions are available on Aigars' website.
BoFs I didn't attend nearly as many talks this DebConf as I would've liked (fortunately I can catch up on video, which should be released soon), but I did make it to a few BoFs. In the Local Groups BoF, representatives from various local teams were present who introduced themselves and explained what they were doing. From memory (sorry if I left someone out), we had people from Belgium, Brazil, Taiwan and South Africa. We talked about the types of events a local group could do (BSPs, Mini DC, sprints, Debian Day, etc.), how to help local groups get started, booth kits for conferences, and setting up some form of calendar that lists important Debian events in a way that makes it easier for people to plan and co-ordinate. There's a mailing list for co-ordination of local groups, and the irc channel is -localgroups on oftc.
If you got one of these Cheese & Wine bags from DebConf, that's from the South African local group!
In the Debian.net BoF, we discussed the Debian.net hosting service, where Debian pays for VMs hosted for projects by individual DDs on Debian.net. The idea is that we start some form of census that monitors the services: whether they're still in use, whether the system is up to date, whether someone still cares for it, etc. We had some discussion about where the lines of responsibility are drawn, and we can probably make things a little bit more clear in the documentation. We also want to offer more in terms of backups and monitoring (currently DDs do get 500GB from rsync.net that could be used for backups of their services though). The intention is also to deploy some form of configuration management for some essentials across the hosts. We should also look at getting some sponsored hosting for this. In the Debian Social BoF, we discussed some services that need work / expansion. In particular, Matrix keeps growing at an increased rate as more users use it and more channels are bridged, so it will likely move to its own host with big disks soon. We might replace Pleroma with a fork called Akkoma; this will need some more homework and checking whether it's even feasible. Some services haven't really been used (like Writefreely and Plume), and it might be time to retire them. We might just have to help one or two users migrate some of their posts away if we do retire them. Mjolner seems to do a fine job at spam blocking; we haven't had any notable incidents yet. WordPress now has improved fediverse support, but it's unclear whether it works on a multi-site instance yet; I'll test it at some point soon and report back. For upcoming services, we are implementing Lemmy and probably also Mobilizon. A request was made that we also look into Loomio. More Information Overload There's so much that happens at DebConf, it's tough to take it all in, and also to find time to write about all of it, but I'll mention a few more things that are certainly worth noting. During DebConf, we had some people from the Kite Linux team over. KITE supplies the ICT needs for the primary and secondary schools in the province of Kerala, where they all use Linux. They decided to switch all of these to Debian. There was an ad-hoc BoF where locals were listening and fielding questions that the Kite Linux team had. It was great seeing all the energy and enthusiasm behind this effort; I hope someone will properly blog about this! I learned about the VGLUG Foundation, who are doing a tremendous job at promoting GNU/Linux in the country. They are also training up 50 people a year to be able to provide tech support for Debian. I came across the booth for Mostly Harmless, who liberate old hardware by installing free firmware on it. It was nice seeing all the devices out there that could be liberated, and how it can breathe new life into old hardware.
Some hopefully harmless soldering.
Overall, the community and their activities in India are very impressive, and I wish I had more time to get to know everyone better. Food Oh yes, one more thing. The food was great. I tasted more different kinds of curry than I ever did in my whole life up to this point. The lunch on banana leaves was interesting, and also learning how to eat this food properly by hand (thanks to the locals who insisted on teaching me!), it was a fruitful experience? This might catch on at home too: fewer dishes to take care of! Special thanks to the DebConf23 Team I think this may have been one of the toughest DebConfs to organise yet, and I don't think many people outside of the DebConf team know about all the challenges and adversity this team has faced in organising it. Even just getting to the previous DebConf in Kosovo was a long and tedious and somewhat risky process. Through it all, they were absolute pros. Not once did I see them get angry or yell at each other; whenever a problem came up, they just dealt with it. They did a really stellar job, and I did make a point of telling them on the last day that everyone appreciated all the work that they did. Back to my nest I bought Dax a ball back from India; he seems to have forgiven me for not taking him along.
I'll probably take a few days soon to focus a bit on my bugs and catch up on my original DebCamp goals. If you made it this far, thanks for reading! And thanks to everyone for being such fantastic people.

15 September 2023

John Goerzen: How Gapped is Your Air?

Sometimes we want better-than-firewall security for things. For instance:
  1. An industrial control system for a municipal water-treatment plant should never have data come in or out
  2. Or, a variant of the industrial control system: it should only permit telemetry and monitoring data out, and nothing else in or out
  3. A system dedicated to keeping your GPG private keys secure should only have material to sign (or decrypt) come in, and signatures (or decrypted data) go out
  4. A system keeping your tax records should normally only have new records go in, but may on occasion have data go out (e.g., to print a copy of an old record)
In this article, I'll talk about the high side (the high-security or high-sensitivity systems) and the low side (the lower-sensitivity or general-purpose systems). For the sake of simplicity, I'll assume the high side is a single machine, but it could as well be a whole network. Let's focus on examples 3 and 4 to make things simpler. Let's consider the primary concern to be data exfiltration (someone stealing your data), with a secondary concern of data integrity (somebody modifying or destroying your data). You might think the safest possible approach is "airgapped": that is, there is literally no physical network connection to the machine at all. This helps! But then the problem becomes: how do we deal with the inevitable need to legitimately get things on or off of the system? As I wrote in Dead USB Drives Are Fine: Building a Reliable Sneakernet, by using tools such as NNCP, you can certainly create a "sneakernet": using USB drives as transport. While this is a very secure setup, as with most things in security, it's less than perfect. The Wikipedia airgap article discusses some ways airgapped machines can still be exploited. It mentions that security holes relating to removable media have been exploited in the past. There are also other ways to get data out; for instance, Debian ships with gensio and minimodem, both of which can transfer data acoustically. But let's back up and think about why we think of airgapped machines as so much more secure, and what the failure modes of other approaches might be.
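For readers unfamiliar with NNCP, a minimal sketch of the sneakernet flow might look like this (the node name highside and the mount point are made up; see the NNCP documentation and the linked article for the real workflow):
# On the low side: queue a file for the high-side node, then write the queued packets to the USB stick
$ nncp-file report.pdf highside:
$ nncp-xfer /mnt/usb
# On the high side, after walking the stick over: ingest the packets and process them
$ nncp-xfer /mnt/usb
$ nncp-toss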

What about firewalls? You could very easily set up a high-side machine that is on a network, but is restricted to only one outbound TCP port. There could be a local firewall, and perhaps also a special port on an external firewall that implements the same restrictions. A variant on this approach would be two computers connected directly by a crossover cable, though this doesn't necessarily imply being more secure. Of course, the concern about a local firewall is that it could potentially be compromised. An external firewall might too; for instance, if your credentials to it were on a machine that got compromised. This kind of dual compromise may be unlikely, but it is possible. We can also think about the complexity in a network stack and firewall configuration, and think that there may be various opportunities to have things misconfigured or buggy in a system of that complexity. Another consideration is that data could be sent at any time, potentially making it harder to detect. On the other hand, network monitoring tools are commonplace. And on the plus side, this approach is convenient and cheap. I use a system along those lines to do my backups. Data is sent, gpg-encrypted and then encrypted again at the NNCP layer, to the backup server. The NNCP process on the backup server runs as an untrusted user, and dumps the gpg-encrypted files to a secure location that is then processed by a cron job using Filespooler. The backup server is on a dedicated firewall port, with a dedicated subnet. The only ports allowed out are for NNCP and NTP, and offsite backups. There is no default gateway. Not even DNS is permitted out (the firewall does the appropriate redirection). There is one pinhole allowed out, where a subset of the backup data is sent offsite. I initially used USB drives as transport, and it had no network connection at all. But there were disadvantages to doing this for backups, particularly that I'd have no backups for as long as I forgot to move the drives. The backup system also would have clock drift, and the offsite backup picture was more challenging. (The clock drift was a problem because I use 2FA on the system; a password, plus a TOTP generated by a Yubikey.) This is pretty good security, I'd think. What are the weak spots? Well, if there were somehow a bug in the NNCP client, and the remote NNCP were compromised, that could lead to a compromise of the NNCP account. But this by itself would accomplish little; some other vulnerability would have to be exploited on the backup server, because the NNCP account can't see plaintext data at all. I use borgbackup to send a subset of backup data offsite over ssh. borgbackup has to run as root to be able to access all the files, but the ssh it calls runs as a separate user. An ssh vulnerability is therefore unlikely to cause much damage. If, somehow, the remote offsite system were compromised and it was able to exploit a security issue in the local borgbackup, that would be a problem. But that sounds like a remote possibility. borgbackup itself can't even be used over a sneakernet since it is not asynchronous. A more secure solution would probably be using something like dar over NNCP. This would eliminate the ssh installation entirely, allow a complete isolation between the data-access and the communication stacks, and notably not require bidirectional communication. Logic separation matters too. My Roundup of Data Backup and Archiving Tools may be helpful here.
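To make the "only one outbound TCP port" idea concrete, here is a rough sketch of what local nftables rules on such a high-side machine could look like (the table name, addresses, and port numbers are made up for illustration; this is not the actual configuration described above):
$ sudo nft add table inet highside
$ sudo nft add chain inet highside output '{ type filter hook output priority 0; policy drop; }'
$ sudo nft add rule inet highside output oif lo accept
$ sudo nft add rule inet highside output ct state established,related accept
# NNCP to the backup server only, plus NTP; everything else is dropped
$ sudo nft add rule inet highside output ip daddr 192.0.2.10 tcp dport 5400 accept
$ sudo nft add rule inet highside output udp dport 123 accept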
Other attack vectors could be a vulnerability in the kernel's networking stack, local root exploits that could be combined with exploiting NNCP or borgbackup to gain root, or local misconfiguration that makes the sandboxes around NNCP and borgbackup less secure. Because this system is in my basement in a utility closet with no chairs and no good place for a console, I normally manage it via a serial console. While it's a dedicated line between the system and another machine, if the other machine is compromised or an adversary gets access to the physical line, credentials (and perhaps even data) could leak, albeit slowly. But we can do much better with serial lines. Let's take a look.

Serial lines

Some of us remember RS-232 serial lines and their once-ubiquitous DB-9 connectors. Traditionally, their speed maxed out at 115.2Kbps. Serial lines have the benefit that they can be a direct application-to-application link. In my backup example above, a serial line could directly link the NNCP daemon on one system with the NNCP caller on another, with no firewall or anything else necessary. It is simply up to those programs to open the serial device appropriately.

This isn't perfect, however. Unlike TCP over Ethernet, a serial line has no inherent error checking. Modern programs such as NNCP and ssh assume that a lower layer is making the link completely clean and error-free for them, and will interpret any corruption as an attempt to tamper and sever the connection. However, there is a solution to that: gensio. In my page Using gensio and ser2net, I discuss how to run NNCP and ssh over gensio. gensio is a generic framework that can add framing, error checking, and retransmit to an unreliable link such as a serial port. It can also add encryption and authentication using TLS, which could be particularly useful for applications that aren't already doing that themselves.

More traditional solutions for serial communications have their own built-in error correction. For instance, UUCP and Kermit were both designed in an era of noisy serial lines and might be an excellent fit for some use cases. The ZModem protocol also might be, though it offers somewhat less flexibility and automation than Kermit.

I have found that certain USB-to-serial adapters by Gearmo will actually run at up to 2Mbps on a serial line! Look for the ones on their spec pages with an FTDI chipset rated at 920Kbps. It turns out they can successfully be driven faster, especially if gensio's relpkt is used. I've personally verified 2Mbps operation (Linux port speed 2000000) on Gearmo's USA-FTDI2X and the USA-FTDI4X. (I haven't seen any single-port options from Gearmo with the 920Kbps chipset, but they may exist.)

Still, even at 2Mbps, speed may well be a limiting factor with some applications. If what you need is a console and some textual or batch data, it's probably fine. If you are sending 500GB backup files, you might look for something else. In theory, this USB to RS-422 adapter should work at 10Mbps, but I haven't tried it. But if the speed works, running a dedicated application over a serial link could be a nice and fairly secure option. One of the benefits of the airgapped approach is that data never leaves unless you are physically aware of transporting a USB stick. Of course, you may not be physically aware of what is ON that stick in the event of a compromise. This could easily be solved with a serial approach by, say, only plugging in the cable when you have data to transfer.
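
As a rough illustration of the ser2net side of this, here is a minimal sketch of exposing a serial port (at the 2Mbps setting mentioned above) on a local TCP port that an NNCP or ssh wrapper could then talk to. The device path, TCP port, speed, and flags are from memory and meant only as a starting point; see the Using gensio and ser2net page for the configurations I actually use, and check the ser2net documentation before relying on this.

    # Sketch only: device, port, and speed are placeholders.
    cat > ./ser2net.yaml <<'EOF'
    connection: &serial-link
      accepter: tcp,localhost,3001
      connector: serialdev,/dev/ttyUSB0,2000000n81,local
    EOF
    ser2net -n -c ./ser2net.yaml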

Data diodes

A traditional diode lets electrical current flow in only one direction. A data diode is the same concept, but for data: a hardware device that allows data to flow in only one direction. This could be useful, for instance, in the tax records system that should only receive data, or the industrial system that should only send it. Wikipedia claims that the simplest kind of data diode is a fiber link with transceivers connected in only one direction. I think you could go one simpler: a serial cable with only ground and TX connected at one end, wired to ground and RX at the other. (I haven't tried this.) This approach does have some challenges:
  • Many existing protocols assume a bidirectional link and won't be usable
  • There is the challenge of confirming that data was successfully received. For a situation like telemetry, maybe it doesn't matter; another observation will come along in a minute. But for sending important documents, one wants to make sure they were properly received.
In some cases, the solution might be simple. For instance, with telemetry, just writing the data down the serial port in a simple format may be enough. For sending files, various mitigations, such as sending them multiple times, might help. You might also look into FEC-supporting infrastructure such as blkar and flute, but these don't provide an absolute guarantee. There is no perfect solution to knowing when a file has been successfully received if the data communication is entirely one-way.
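
For the telemetry case, a one-way sender over such a diode could be as simple as the sketch below. The device path and the sensors command are assumptions for illustration; any line-oriented output would do, and real use would want some framing or checksums on top.

    # Transmit side: write readings down the one-way serial line once a minute.
    stty -F /dev/ttyUSB0 9600 raw -echo
    while true; do
        date -u +%FT%TZ
        sensors -u 2>/dev/null    # or any other line-oriented telemetry source
        sleep 60
    done > /dev/ttyUSB0

    # Receive side (the machine wired to RX only): just append everything to a log.
    stty -F /dev/ttyUSB0 9600 raw -echo
    cat /dev/ttyUSB0 >> telemetry.log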

Audio transport

I hinted above that minimodem and gensio both are software audio modems. That is, you could literally use speakers and microphones, or alternatively audio cables, as a means of getting data into or out of these systems. This is pretty limited; it is 1200bps, and often half-duplex, and could literally be disrupted by barking dogs in some setups. But hey, it's an option.
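
With minimodem, a basic one-way transfer can look something like this sketch; the 1200 baud mode and the plain file redirection are just illustrative, and anything important would want checksumming or retransmission layered on top.

    # Sending side: play a file as 1200 baud audio out the default sound device.
    minimodem --tx 1200 < message.txt

    # Receiving side: listen on the microphone or line-in and write decoded bytes to a file.
    minimodem --rx 1200 > received.txt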

Airgapped with USB transport

This is the scenario I began with, and named some of the possible pitfalls above as well. In addition to those, note also that USB drives aren't necessarily known for their error-free longevity. Be prepared for failure.
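
With NNCP, the USB-drive leg is handled by nncp-xfer, which reads and writes queued packets in a directory on the mounted drive. A hedged sketch of the round trip, assuming the drive is mounted at /mnt/usb and both nodes are already configured as NNCP neighbors (the file name and node name are made up):

    # On the sending machine: queue a file for the backup server, then copy outbound packets to the stick.
    nncp-file backup.tar.gpg backupserver:
    nncp-xfer -tx /mnt/usb

    # On the receiving (airgapped) machine: ingest packets from the stick and process the inbound spool.
    nncp-xfer -rx /mnt/usb
    nncp-toss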

Concluding thoughts

I wanted to lay out a few things in this post. First, that simply being airgapped is generally a step forward in security, but is not perfect. Secondly, that both physical and logical separation matter. And finally, that while tools like NNCP can make airgapped-with-USB-drive-transport a doable reality, there are also alternatives worth considering, especially serial ports, firewalled hard-wired Ethernet, data diodes, and so forth. I think serial links, in particular, have been largely forgotten these days. Note: This article also appears on my website, where it may be periodically updated.

12 September 2023

John Goerzen: A Maze of Twisty Little Pixels, All Tiny

Two years ago, I wrote Managing an External Display on Linux Shouldn't Be This Hard. Happily, since I wrote that post, most of those issues have been resolved. But then you throw HiDPI into the mix and it all goes wonky.

If you're running X11, basically the story is that you can change the scale factor, but it only takes effect on newly-launched applications (which means a logout/login, because some of your applications you can't really re-launch). That is a problem if, like me, you sometimes connect an external display that is HiDPI, sometimes not, or your internal display is HiDPI but others aren't. Wayland is far better, supporting on-the-fly resizes quite nicely.

I've had two devices with HiDPI displays: a Surface Go 2, and a work-issued Thinkpad. The Surface Go 2 is my ultraportable Linux tablet. I use it sparingly at home, and rarely with an external display. I just put Gnome on it, in part because Gnome had better on-screen keyboard support at the time, and left it at that. On the work-issued Thinkpad, I really wanted to run KDE thanks to its tiling support (I wound up using bismuth with it). KDE was buggy with Wayland at the time, so I just stuck with X11, ran my HiDPI displays at lower resolutions, and lived with the fuzziness. But now that I have a Framework laptop with a HiDPI screen, I wanted to get this right. I tried both Gnome and KDE. Here are my observations with both:

Gnome

I used PaperWM with Gnome. PaperWM is a tiling manager with a unique horizontal ribbon approach. It grew on me; I think I would be equally at home with it as with my usual xmonad-style approach, or maybe even prefer it. Editing the active window border color required editing ~/.local/share/gnome-shell/extensions/paperwm@hedning:matrix.org/stylesheet.css and inserting background-color and border-color items in the paperwm-selection section.

Gnome continues to have an absolutely terrible picture for configuring things. It has no less than four places to make changes (Settings, Tweaks, Extensions, and dconf-editor). In many cases, configuration for a given thing is split between Settings and Tweaks, and sometimes even with Extensions, and then there are sometimes options that are only visible in dconf. That is, where the Gnome people have even allowed something to be configurable.

Gnome installs a power manager by default. It offers three options: performance, balanced, and saver. There is no explanation of the difference between them. None. What is it setting when I change the pref? A maximum frequency? A scaling governor? A balance between performance and efficiency cores? Not only that, but there's no way to tell it to just use performance when plugged in and balanced or saver when on battery. In an issue about adding that, a Gnome dev wrote "We're not going to add a preference just because you want one". KDE, on the other hand, aside from not mucking with your system's power settings in this way, has a nice panel with "on AC" and "on battery" and you can very easily tweak various settings accordingly. The hostile attitude from the Gnome developers in that thread was a real turnoff.

While Gnome has excellent support for Wayland, it doesn't (directly) support fractional scaling. That is, you can set it to 100%, 200%, and so forth, but no 150%. Well, unless you manage to discover that you can run gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']" first. (Oh wait, does that make a FIFTH settings tool? Why yes it does.)
Despite its name, that allows you to select fractional scaling under Wayland. For X11 apps, they will be blurry, a problem that is optional under KDE (more on that below).

Gnome won't show the battery life time remaining on the task bar. Yikes. An extension might work in some cases. Not only that, but the Gnome battery icon frequently failed to indicate AC charging when AC was connected, a problem that didn't exist on KDE.

Both Gnome and KDE support night light (warmer color temperatures at night), but Gnome's often didn't change when it should have, or changed on one display but not the other.

The appindicator extension is pretty much required, as otherwise a number of applications (eg, Nextcloud) don't have their icon display anywhere. It does, however, generate a significant amount of log spam. There may be a fix for this.

Unlike KDE, which has a nice unobtrusive popup asking what to do, Gnome silently automounts USB sticks when inserted. This is often wrong; for instance, if I'm about to dd a Debian installer to it, I definitely don't want it mounted. I learned this the hard way. It is particularly annoying because in a GUI, there is no reason to mount a drive before the user tries to access it anyhow. It looks like there is a dconf setting, but then to actually mount a drive you have to open up Files (because OF COURSE Gnome doesn't have a nice removable-drives icon like KDE does) and it's a bunch of annoying clicks, and I didn't want to use the GUI file manager anyway. Same for unmounting; two clicks in KDE thanks to the task bar icon, but in Gnome you have to open up the file manager, unmount the drive, close the file manager again, etc.

The ssh agent on Gnome doesn't start up for a Wayland session, though this is easily enough worked around.

The reason I completely soured on Gnome is that after using it for a while, I noticed my laptop fans spinning up. One core would be constantly busy. It was busy with a kworker events task, something to do with sound events. Logging out would resolve it. I believe it to be a Gnome shell issue. I could find no resolution to this, and am unwilling to tolerate the decreased battery life this implies.

The Gnome summary: it looks nice out of the box, but you quickly realize that this is something of a paper-thin illusion when you try to actually use it regularly.

KDE

The KDE experience on Wayland was a little bit the opposite of Gnome. While with Gnome, things start out looking great but you realize there are some serious issues (especially battery-eating), with KDE things start out looking a tad rough but you realize you can trivially fix them and wind up with a very solid system.

Compared to Gnome, KDE never had a battery-draining problem. It will show me estimated battery time remaining if I want it to. It will do whatever I want it to when I insert a USB drive. It doesn't muck with my CPU power settings, and lets me easily define on-AC vs on-battery settings for things like suspend when idle.

KDE supports fractional scaling, to any arbitrary setting (even with the gsettings thing above, Gnome still only supports it in 25% increments). Then the question is what to do with X11-only applications. KDE offers two choices. The first is "Scaled by the system", which is also the only option for Gnome. With that setting, the X11 apps effectively run natively at 100% and then are scaled up within Wayland, giving them a blurry appearance on HiDPI displays.
The advantage is that the scaling happens within Wayland, so the size of the app will always be correct even when the Wayland scaling factor changes. The other option is "Apply scaling themselves", which uses native X11 scaling. This lets most X11 apps display crisp and sharp, but then if the system scaling changes, due to limitations of X11, you'll have to restart the X apps to get them to be the correct size. I appreciate the choice, and use "Apply scaling themselves" because only a few of my apps aren't Wayland-aware.

I did encounter a few bugs in KDE under Wayland: sddm, the display manager, would be slow to stop and cause a long delay on shutdown or reboot. This seems to be a known issue with sddm and Wayland, and is easily worked around by adding a systemd TimeoutStopSec. Konsole, the KDE terminal emulator, has weird display artifacts when using fractional scaling under Wayland. I applied some patches and rebuilt Konsole and then all was fine. The Bismuth tiling extension has some pretty weird behavior under Wayland, but a 1-character patch fixes it. On Debian, KDE mysteriously installed Pulseaudio instead of Debian's new default Pipewire, but that was easily fixed as well (and Pulseaudio also works fine).

Conclusions

I'm sticking with KDE. Given that I couldn't figure out how to stop Gnome from deciding to eat enough battery to make my fan come on, the decision wasn't hard. But even if it weren't for that, I'd have gone with KDE. Once a couple of things were patched, the experience is solid, fast, and flawless. Emacs (my main X11-only application) looks great with the self-scaling in KDE. Gimp, which I use occasionally, was terrible with the blurry scaling in Gnome.

Update: Corrected the gsettings command
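
For reference, the sddm shutdown-delay workaround mentioned above amounts to a small systemd override. A hedged sketch, with the drop-in filename and the timeout value chosen arbitrarily for illustration:

    # Sketch only: cap how long systemd waits for sddm to stop at shutdown.
    sudo mkdir -p /etc/systemd/system/sddm.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/sddm.service.d/timeout.conf
    [Service]
    TimeoutStopSec=10s
    EOF
    sudo systemctl daemon-reload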

Valhalla's Things: How I Keep my Life in Git

Posted on September 12, 2023
git secret_cabal greet
After watching My life in git, after subversion, after CVS. from DebConf, I've realized it's been a while since I talked about the way I keep everything1 I do in git, and I don't think I've ever done it online, so it looked like a good time for a blog post. Beyond git itself (of course), I use a few git-related programs:
  • myrepos (also known as mr) to manage multiple git repositories with one command;
  • vcsh to make it easy to keep dot-files under git;
  • git annex to store media files (anything that is big and will not change);
  • etckeeper to keep a history of the /etc directory;
  • gitolite and cgit to host my git repositories;
and some programs that don t use git directly, but easily interact with it:
  • ansible to keep track of the system configuration of all machines;
  • lesana as a project tracker and journal and to inventory the things made of atoms that are hard 2 to store in git.
All of these programs are installed from Debian packages, on stable (plus rarely backports) or testing, depending on the machine. I'm also grateful to the vcs-home people, who wrote most of the tools I use, and sometimes hang around their IRC channel. And now, on to what I'm actually doing. With the git repositories I've decided to err on the side of too much granularity rather than too little3, so of course each project has its own repository, and so do different kinds of media files, dot-files that are related to different programs, etc. Most of the repositories are hosted on two gitolite servers: one runs on the home server, for stuff that should remain private, and the other one is on my VPS for things that are public (or may become public in the future), and also has a web interface with cgit. Of course things where I'm collaborating with other people are sometimes hosted elsewhere, mostly on salsa, sourcehut or on $DAYJOB related gitlab instances. The .mr directory is where everything is managed: I don't have a single .mrconfig file but a few different ones, that in turn load all files in a directory with the same name:
  • collections.mr for the media file annexes and inventories (split into different files, so that computers with little disk space can only get the inventories);
  • private.mr for stuff that should only go on my own personal machine, not on shared ones;
  • projects.mr for the actual projects, with different files for the kinds of projects (software, docs, packaging, crafts, etc.);
  • setup.mr with all of the vcsh repositories, including the one that tracks the mr files (I'll talk about the circular dependency later);
  • work.mr for repositories that are related to $DAYJOB.
Then there are the files in the .mr/machines directory, each one of which has the list of repositories that should be on a specific machine: there is a generic workstation, but also specific machines such as e.g. the media center, which has a custom set of repositories.

The dot-files from my home directory are kept in vcsh, so that it's easy to split them out into different repositories, and I mostly use the simplest configuration described in the 30 Second How-to on its homepage; vcsh gives some commands to work on all vcsh repositories at the same time, but most of the time I work on a single repository, and use mr to act on more than one repo.

The media collections are also pretty straightforward git-annex repositories, one for each kind of media (music, movies and other videos, e-books, pictures, etc.), and I don't use any auto-syncing features but simply copy and move files around between clones with the git annex copy, git annex move and git annex get commands.

There isn't much to say about the project repositories (plain git), and I think that the way I use my own program lesana for inventories and project tracking is worth an article of its own; here I'll just say that the file format used has been designed (of course) to work nicely with git.

On every machine I install etckeeper so that there is a history of the changes in the /etc directory, but that's only a local repository, not stored anywhere else, and is used mostly in case something breaks with an update or in similar situations. The authoritative source for the configuration of each machine is an ansible playbook (of course saved in git) which can be used to fully reconfigure the machine from a bare Debian installation.

When such a reconfiguration from scratch happens, it will be in two stages: first a run of ansible does the system-wide configuration (including installing packages, creating users etc.), and then I log in on the machine and run mr to set up my own home. Of course there is a chicken-and-egg problem, in that I need the mr configuration to know where to get the mr configuration, and that is solved by having set up two vcsh repositories from an old tarball export: the one with the ssh configuration to access the repositories and the one with the mr files. So, after a machine has been configured with ansible, what I'll actually do is log in, use vcsh pull to update those two repositories and then run mr to check out everything else.

And that's it; if you have questions on something feel free to ask me on the fediverse or via email (contacts are in the about page).

Update (2023-09-12 17:00ish): The ~/.mr directory is not special for mr, it's just what I use; I always run mr -c ~/.mr/some/suitable/file.mr, with the actual file being different depending on whether I'm registering a new repo or checking out / updating them. I could include some appropriate ~/.mr/machines/some_machine.mr in ~/.mrconfig, but I've never bothered to do so, since it wouldn't cover all use cases anyway. Thanks to the person on #vcs-home@OFTC who asked me the question :)
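
To make the .mr layout a bit more concrete, here is a hedged sketch of what a pair of those files could look like; the file names, repository paths and gitolite URL are made up for illustration, and the include trick is how a top-level file pulls in everything from its matching directory.

    # Hypothetical ~/.mr/projects.mr: load every file in the matching directory.
    [DEFAULT]
    include = cat ~/.mr/projects/*.mr 2>/dev/null || true

    # Hypothetical ~/.mr/projects/software.mr: one section per repository.
    [src/someproject]
    checkout = git clone 'gitolite@git.example.org:someproject' 'someproject'

Checking everything out on a given machine would then amount to something like mr -c ~/.mr/machines/workstation.mr checkout, with workstation.mr being a hypothetical machine file of the kind described above.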

  1. At least, everything that I made that is made of bits, and a diary and/or inventory of the things made of atoms.
  2. until we get a working replicator, I guess :D
  3. in time I've consolidated some of the repositories a bit, e.g. merging the repositories for music from different sources (CD rips, legal downloads, etc.) into a single repository, but that only happened a few times, and usually I'm fine with the excess of granularity.

Freexian Collaborators: Monthly report about Debian Long Term Support, August 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In August, 19 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 0.0h (out of 12.0h assigned and 2.0h from previous period), thus carrying over 14.0h to the next month.
  • Adrian Bunk did 18.5h (out of 18.5h assigned).
  • Anton Gladky did 7.5h (out of 5.0h assigned and 10.0h from previous period), thus carrying over 7.5h to the next month.
  • Bastien Roucariès did 17.0h (out of 15.5h assigned and 3.0h from previous period), thus carrying over 1.5h to the next month.
  • Ben Hutchings did 18.5h (out of 9.0h assigned and 9.5h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Emilio Pozuelo Monfort did 18.5h (out of 18.25h assigned and 0.25h from previous period).
  • Guilhem Moulin did 24.0h (out of 22.5h assigned and 1.5h from previous period).
  • Jochen Sprickerhof did 2.5h (out of 8.5h assigned and 10.0h from previous period), thus carrying over 16.0h to the next month.
  • Lee Garrett did 18.0h (out of 9.25h assigned and 9.25h from previous period), thus carrying over 0.5h to the next month.
  • Markus Koschany did 28.5h (out of 28.5h assigned).
  • Ola Lundqvist did 0.0h (out of 0h assigned and 24.0h from previous period), thus carrying over 24.0h to the next month.
  • Roberto C. Sánchez did 18.5h (out of 13.0h assigned and 5.5h from previous period).
  • Santiago Ruano Rincón did 18.5h (out of 18.25h assigned and 0.25h from previous period).
  • Sean Whitton did 7.0h (out of 10.0h assigned), thus carrying over 3.0h to the next month.
  • Sylvain Beucler did 18.5h (out of 9.75h assigned and 8.75h from previous period).
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 16.0h (out of 16.0h assigned).
  • Utkarsh Gupta did 12.25h (out of 0h assigned and 12.25h from previous period).

Evolution of the situation

In August, we released 42 DLAs. The month of August turned out to be a rather quiet one for the LTS team. Three notable updates were to bouncycastle, openssl, and zabbix. In the case of bouncycastle, a flaw allowed for the possibility of LDAP injection, and the openssl update corrected a resource exhaustion bug that could result in a denial of service. Zabbix, while not widely used, was the subject of several vulnerabilities which, while not individually severe, combined to make the zabbix update particularly noteworthy. Apart from those, the LTS team continued the always ongoing work of triaging, investigating, and fixing vulnerabilities, as well as making contributions to the broader Debian and Free Software communities.

Thanks to our sponsors

Sponsors that joined recently are in bold.
